
Diana L. Moss is President of the American Antitrust Institute

Innovation Competition in the Spotlight

Innovation is more and more in the spotlight as questions grow about concentration and declining competition in the U.S. economy. These questions come not only from advocates for more vigorous competition enforcement but also, increasingly, from those who adhere to the school of thought that consolidation tends to generate procompetitive efficiencies. On March 27th, the European Commission issued its decision approving the Dow-DuPont merger, subject to divestitures of DuPont’s global R&D agrichemical assets to preserve price and innovation competition.

Before we read too much into what the EU decision in Dow-DuPont means for merger review in the U.S., remember that agriculture differs markedly across regions. Europe uses very little genetically modified (or transgenic) seed, whereas row crop acreage in the U.S. is planted mostly with it. This counsels against drawing broad cross-jurisdictional conclusions from the EU’s decision.

This post unpacks the mergers of Dow-DuPont and Monsanto-Bayer in the U.S. and what they mean for innovation competition.

A Troubled Landscape? Past Consolidation in Agricultural Biotechnology

If approved as proposed, the mergers of Dow-DuPont and Monsanto-Bayer would reduce the field of Big 6 agricultural biotechnology (ag-biotech) firms to the Big 4. This has raised concerns about potentially higher prices for traits, seeds, and agrichemicals, less choice, and less innovation. The two mergers would mark a third wave of consolidation in the industry since the mid-1980s, when transgenic technology first emerged. Past consolidation has materially affected the structure of these markets. This is particularly true in crop seed, where both the level of concentration and its increase over time are the highest of any agricultural input sector.

Growers and consumers feel the effects of these changes. Consumers notice that their choices at the grocery store have arguably diminished, even as the prices they pay have risen faster than inflation. And the states in which agriculture is a major economic activity worry about their growers and the prices they pay for transgenic seed, agrichemicals, and fertilizers. Farmers we spoke to note, for example, that weeds resistant to the herbicide Roundup have evolved over time, making the product less effective than it once was. The industry has responded to growers’ dependence on these cropping systems of declining effectiveness (due to resistance) by offering newer, more expensive traited seed and different agrichemicals. With consolidation, even those alternatives have dwindled.

These are not frivolous concerns. Empirical evidence shows that “technology fees” on transgenic corn, soybean, and cotton seed make up a significant proportion of total seed costs. The USDA notes that the prices of farm inputs, led by crop seed, generally have risen faster over the last 20 years than the prices farmers have received for their commodities. Moreover, seed price increases have outpaced yield increases over time. And finally, the USDA has determined that increasing levels of concentration in agricultural input markets (including crop seed) are no longer generally associated with higher R&D or a permanent rise in R&D intensity.

Putting the Squeeze on Growers and Consumers

The “squeeze” on growers and consumers highlights the fact that ag-biotech innovation comes at an increasingly higher price – a price that many worry will increase if the Dow-DuPont and Monsanto-Bayer mergers go through. These concerns are magnified by the structure of the food supply chain where we see a lot of growers and consumers at either end but not a lot of competition in the middle. In the middle are the ag-biotech firms that innovate traits, seeds, and agrichemicals; food processors such as grain millers and meatpackers; food manufacturers; distributors; and retail grocers.

Almost every sector has been affected by significant consolidation over the last two decades, some of which has been blocked, but a lot of which has not. For example, U.S. antitrust enforcers stopped the mergers of beef packers JBS and National Beef and broadline food distributors Sysco and USFoods. But key mergers that many believed raised significant competitive concerns went through, including Tyson-Hillshire Brands (pork), ConAgra-Horizon Mills (flour), Monsanto-Delta & Pine Land (cotton), and Safeway-Albertsons (grocery).

Aside from concerns over price, quality, and innovation, consolidation in “hourglass” shaped supply chains raises other issues. For example, it is often motivated by incentives to bulk up to bargain more effectively vis-a-vis more powerful input suppliers or customers. As we have seen with health care providers and health insurers, mergers for this purpose can trigger further consolidation, creating a domino effect. A bottlenecked supply chain also decreases resiliency. With less competition, it is more exposed to exogenous shocks such as bioterrorism or food-borne disease. That’s a potential food security problem.

Innovation Competition and the Agricultural Biotechnology Mergers

The Dow-DuPont and Monsanto-Bayer merger proposals raise a number of issues. One is significant overlap in seed, likely to result in a duopoly in corn and soybeans and a dominant firm (Monsanto) in cotton. A second concern is that the mergers would create or enhance substantial vertical integration. While some arguments for integration can carry weight in a Guidelines analysis, here there is economic evidence from soybeans and cotton indicating that prices tend to be higher under vertical integration than under cross-licensing arrangements.

Moreover, the “platforms” resulting from the mergers are likely to be engineered to create exclusive packages of traits, seeds, and agrichemicals that are less likely to interoperate with rival products. This could raise entry barriers for smaller innovators and reduce or cut off access to resources needed to compete effectively. Indeed, one farmer noted the constraint of being locked into a single traits-seeds-chemicals platform in a market with already limited competition: “[I] can’t mix chemicals with other companies’ products to remedy Roundup resistance.”

A third concern raised by the mergers is the potential elimination of competition in innovation markets. The DOJ/FTC Horizontal Merger Guidelines (§6.4) note that a merger may diminish innovation competition through curtailment of “innovative efforts below the level that would prevail in the absence of the merger.” This is especially the case when the merging firms are each other’s close competitors (e.g., as in the DOJ’s case against Applied Materials and Tokyo Electron). Dow, DuPont, Monsanto, and Bayer are four of only six ag-biotech rivals.

Preserving Parallel Path R&D Pipelines

In contrast to arguments that the mergers would combine only complementary assets, the R&D pipelines for all four firms show overlaps in major areas of traits, seeds, and crop protection. This supports the notion that the R&D pipelines compete head-to-head for technology intended for commercialization in U.S. markets. Maintaining competition in R&D ensures incentives remain strong to continue existing and prospective product development programs. This is particularly true in industries like ag-biotech (and pharma) where R&D is risky, regulatory approvals take time, and commercial success depends on crop planning and switching costs.

Maintaining Pro-Competitive Incentives to Cross-License Traits

Perhaps more important is that innovation in ag-biotech depends on maintaining a field of rivals, each with strong pro-competitive incentives to collaborate to form new combined (i.e., “stacked”) trait profiles. Farmers benefit most when there are competing stacks to choose from. About 60% of all stacks on the market in 2009 were the result of joint venture cross-licensing collaborations across firms. And the traits innovated by Dow, DuPont, Monsanto, and Bayer account for over 80% of the traits in those stacks. That these companies are important innovators is apparent in GM Crop Database data on transgenic corn, soybean, and cotton “events” approved in the U.S. From 1991 to 2014, for example, the four companies accounted for a significant proportion of innovation in important genetic events.

Competition maximizes the potential for numerous collaborations. It also minimizes incentives to refuse to license, to impose discriminatory restrictions in technology licensing agreements, or to tacitly “agree” not to compete. Such agreements could range from deciding which firms specialize in certain crops or traits, to devising market “rules,” such as cross-licensing terms and conditions. All of this points to the importance of maintaining multiple, parallel R&D pipelines, a notion that was central to the EU’s decision in Dow-DuPont.

Remedies or Not? Preserving Innovation Competition

The DOJ has permitted two major ag-biotech mergers in the last decade, Monsanto’s mergers with DeKalb (corn) and Delta & Pine Land (cotton). In both cases, the DOJ recognized the importance of innovation markets by fashioning remedies that focused on licensing or divesting patented technologies. The proposed mergers of Dow-DuPont and Monsanto-Bayer appear to be a different animal. They would reduce an already small field of large, integrated competitors, raise competitive concerns with more breadth and complexity than previous mergers, and are superimposed on growing evidence that transgenic technology has come at a higher and higher price.

Add to this the fact that a viable buyer of any divestiture R&D asset would be difficult to find outside the Big 6. Such a buyer would need to be national, if not global, in scale and scope in order to compete effectively post-merger. Lack of scale and scope in R&D, financing, marketing, and distribution would necessitate cobbling together a package of assets to create and potentially prop up a national competitor. While the EU managed to pull this off, it is unclear whether the fact pattern in the U.S. would support a similar outcome. What we do know is that past mergers in the food and agriculture space have squeezed growers and consumers. Unless adequately addressed, these mega-deals stand to squeeze them even more.

Shubha Ghosh is Crandall Melvin Professor of Law and Director of the Technology Commercialization Law Program at Syracuse University College of Law

How should patents be taken into consideration in merger analysis? When does the combination of patent portfolios raise anticompetitive concerns? Two principles should guide these inquiries. First, as the Supreme Court held in its 2006 Independent Ink decision, ownership of a patent does not by itself confer market power. That ruling came in the context of a tying claim, but it is generalizable. While ownership of a patent can provide advantages in the market, such as access to techniques that are more effective than those available to a competitor or the ability to keep competitors from making desirable differentiations in existing products, ownership of a patent or patent portfolio does not per se confer market power. Competitors might have equally strong and broad patent portfolios. The power to limit price competition may be counterbalanced by competition over technology and product quality.

A second principle about patents and markets, however, bespeaks more caution in antitrust analysis. Patents can create information problems while at the same time potentially resolving some externality problems arising from knowledge spillovers. Information problems arise because patents are not well-defined property rights with clear boundaries. While patents are granted to novel, nonobvious, useful, and concrete inventions (as opposed to abstract, disembodied ideas), it is far from clear when a patented invention is actually nonobvious. Patent rights extend to several possible embodiments of a novel, useful, and nonobvious conception. While in theory this problem could be solved by limiting patent rights to narrow embodiments, the net result would be increased uncertainty through patent thickets and divided ownership. Inventions do not come in readily discernible units or engineered metes and bounds (despite the rhetoric).

The information problems created by patents do not create traditional market power in the sense of control over the price charged to consumers, but they do impose costs on competitors that can give a patent owner some control over market entry and the market conditions confronting consumers. The Court’s perhaps sanguine decoupling of patents and market power in its 2006 decision has some force in a market setting where patent rights are fairly equally distributed among competitors. In such a setting, each firm faces the same uncertainties that arise from patents. However, if patent ownership is imbalanced among firms, competition authorities need to act with caution. The challenge is identifying an imbalanced patent position in the marketplace.

Mergers among patent-owning firms invite antitrust scrutiny for these reasons. Metrics of patent ownership that focus solely on the quantity of patents owned, adjusted for the number of claims, can offer a snapshot of the distribution of ownership. But patent numbers need to be connected to the costs of operating the firm. Patents can lower a firm’s costs, create a niche for a particular differentiated product, and give a firm a head start in the next generation of technologies. Mergers that lead to an increased concentration of patent ownership may raise eyebrows, but those that lead to a significant increase in competitors’ costs and create potential impediments to market entry require a response from competition authorities. That response could be blocking the merger or, perhaps more practically in most instances, divestment of the patent portfolio implemented through licensing requirements. This last approach is particularly appropriate where the technologies at issue are analogous to standard-essential patents subject to FRAND commitments in the standard-setting context.

Claims of synergies should, in many instances, be met with skepticism when the patent portfolios of the merging companies are combined. While the technologies may be complementary, yielding benefits that go beyond those arising from a cross-licensing arrangement, the integration of portfolios may serve to raise costs for potential rivals in the marketplace. These barriers to entry may arise even in the case of vertical integration when the firms internalize contracting costs for technology transfer through ownership. Vertical integration of patent portfolios may raise costs for rivals both at the manufacturing and the distribution levels.

These ideas are set forth as propositions to be tested, but also as general policy guidance for merger review involving companies with substantial patent portfolios. The ChemChina-Syngenta merger perhaps opens up global markets, but it is likely to impose barriers on companies in the agriculture market. The Bayer-Monsanto and Dow-DuPont mergers have questionable synergies. Even where synergies are plausible, those projected benefits need to be weighed against very identifiable sources of market foreclosure. While patents may not create market power per se, according to the Supreme Court, the potential for mischief should not be underestimated.

Levi A. Russell is Assistant Professor, Agricultural & Applied Economics, University of Georgia and a blogger at Farmer Hayek.

Though concentration seems to be an increasingly popular metric for discussing antitrust policy (a backward move in my opinion, given the theoretical work by Harold Demsetz and others many years ago in this area), contestability is still the standard for evaluating antitrust issues from an economic standpoint. Contestability theory, most closely associated with William Baumol, rests on three primary principles. A market is perfectly contestable if 1) new entrants are not at a cost disadvantage to incumbents, 2) there are no barriers to entry or exit, and 3) there are no sunk costs. In this post, I discuss these conditions in relation to recent mergers and acquisitions in the agricultural chemical and biotech industry.

Contestability is rightly understood as a spectrum. While no industry is perfectly contestable, we expect markets in which barriers to entry and exit are low, sunk costs are low, and new entrants can produce at costs similar to incumbents’ to be more innovative, and to have prices closer to marginal cost, than other industries. Judged against these conditions, the agricultural chemical and biotech space does not appear to be very contestable. There are significant R&D costs associated with the creation of new chemistries and new seed traits, and the production and distribution of these products are likely to be characterized by significant economies of scale. None of the three conditions listed above is met, and the industry seems to be characterized by very low contestability. We would expect, then, that these mergers and acquisitions would drive up the prices of the companies’ products, leading to higher monopoly profits. Indeed, one study conducted at Texas A&M University finds that, as a result of the Bayer-Monsanto acquisition and the DuPont/Pioneer merger with Dow, corn, soybean, and cotton seed prices will rise by an estimated 2.3%, 1.9%, and 18.2%, respectively.

These estimates are certainly concerning, especially given the current state of the agricultural economy. As the authors of the Texas A&M study point out, they provide a justification for antitrust authorities to examine the merger and acquisition cases further. However, our reliance on the contestability concept as it applies to the real world should also be scrutinized. To do so, we can examine other industries in which, according to the standard model of contestability, we would expect to find high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants.

This chart, assembled by the American Enterprise Institute using data from the Bureau of Labor Statistics, shows the changes in prices of several consumer goods and services from 1996 to 2016, compared with CPI inflation. Industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants, such as automobiles, wireless service, and TVs, have seen their prices plummet relative to inflation over the 20-year period. There has also been significant product innovation in these industries over that time.

Disallowing mergers or acquisitions that will create synergies that lead to further innovation or lower cost is not an improvement in economic efficiency. The transgenic seeds created by some of these companies have allowed farmers to use less-toxic pesticides, providing both private and public benefits. Thus, the higher prices projected by the A&M study might be justified on efficiency grounds. The R&D performed by these firms has led to new pesticide chemistries that have allowed farmers to deal with changes in the behavior of insect populations and will likely allow them to handle issues of pesticide resistance in plants and insects in the future.

What does the empirical evidence on price trends and on the value of these firms’ innovations imply about contestability and its relation to antitrust enforcement? Contestability should be understood not as a static concept, but as a dynamic one. Competition, more broadly, is the constant striving to outdo competitors and to capture economic profit, not a set of conditions used to analyze a market via a snapshot in time. A proper understanding of competition as a dynamic concept leads to the following conclusion: for a market to be contestable such that incumbents are incentivized to behave in a competitive manner, the cost advantages and barriers to entry or exit enjoyed by incumbents must be equal to or less than an entrepreneur’s expectation of economic profit associated with entry. Thus, a commitment to property rights by antitrust courts, and avoidance of excessive licensure, intellectual property, and economic regulation by the legislative and executive branches, is sufficient from an economic perspective to ensure a reasonable degree of contestability in markets.
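One way to summarize that entry condition, stated loosely and in notation of my own choosing rather than the author’s, is:

$$ \underbrace{\Delta c}_{\text{incumbents' cost advantage}} \;+\; \underbrace{B}_{\text{barriers to entry or exit}} \;\le\; \underbrace{E[\pi_{\text{entry}}]}_{\text{entrant's expected economic profit}} $$

When the inequality holds, the threat of entry can discipline incumbents even in a market that looks concentrated in a static snapshot; when it fails, incumbents have room to price above competitive levels without inviting entry.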

In my next post I will discuss a source of disruptive technology that will likely provide some competitive pressure on the firms in these mergers and acquisitions in the near future.

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesies a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest in the introduction of a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would permit the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of introducing this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis into EU merger policy would no doubt be a sea change compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the merging parties’ innovation incentives in relation to specific current or future products. The application of the SIII theory, however, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole and (ii) not in relation to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is way more intrusive, however, because R&D incentives are considered in the abstract, without further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

With this in mind, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rely on sound theoretical and empirical infrastructure. Yet, despite efforts by the most celebrated Nobel Prize-winning economists of the past decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (yet curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the Big 6 agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (that I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and screen initial intuitions of adverse effects of mergers on innovation through the lenses of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives such as R&D figures and patent statistics as a preliminary screening tool for the assessment of the effects of the merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into products that are commercialised or are equal in terms of commercial value.
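To see why, consider how R&D intensity is commonly defined, with illustrative numbers of my own rather than figures from the paper:

$$ \text{R\&D intensity} \;=\; \frac{\text{R\&D expenditure}}{\text{sales revenue}}, \qquad \frac{100}{1{,}000} = 10\% \;\longrightarrow\; \frac{100}{800} = 12.5\% $$

If a commodity-price slump cuts a firm’s sales from 1,000 to 800 while its R&D budget stays flat at 100, measured R&D intensity rises from 10% to 12.5% even though innovation effort is unchanged; a sales boom produces the opposite artifact.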

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in the markets being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, at the risk of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not account for the maturity of patents and, in particular, say little about whether a patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement an empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may only be one amongst many other factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – has fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and on market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Finally, antitrust agencies are well placed to understand that, beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title says it all: “The Pretense of Knowledge.”

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild west, as some have suggested? Let’s break it down. Here are The Good, The Bad, and The Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and Council this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers who ultimately will have to find either a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced with not only economic efficiency, but also with the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also impair small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. Aside from this, multi-homing is ubiquitous in the Internet economy, anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually be ratifying a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright among the reasons why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to disproportionately affect these important, US companies most of all. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad and the Ugly, Blondie and Tuco have an exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The questions of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to take greater consideration of privacy concerns during merger review, and even encouraged to bring monopolization claims based upon data dominance. These calls should be rejected unless such theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such data would charge lower prices to those who can pay more while charging higher prices to those who cannot afford it. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.
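A toy accounting example, with hypothetical numbers of my own rather than anything from the paper, shows how the sign of the consumer welfare effect turns on these magnitudes. Suppose personalized pricing raises the price paid by 100 existing customers by $2 each, while 150 previously priced-out consumers now buy and gain $1 of surplus each:

$$ \Delta CS = \underbrace{-(100 \times \$2)}_{\text{surplus lost by those charged more}} \;+\; \underbrace{(150 \times \$1)}_{\text{surplus gained by newly served buyers}} = -\$50 $$

Swap in 250 newly served consumers instead of 150 and the same arithmetic yields +$50; nothing in the theory fixes which case obtains, which is why the magnitudes must be measured rather than assumed.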

DATA BARRIER TO ENTRY

Both of these theories of harm are predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data both to attract online advertisers and to foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

Last year, Microsoft’s new CEO, Satya Nadella, seemed to break with the company’s longstanding “complain instead of compete” strategy to acknowledge that:

We’re going to innovate with a challenger mindset…. We’re not coming at this as some incumbent.

Among the first items on his agenda? Treating competing platforms like opportunities for innovation and expansion rather than obstacles to be torn down by any means possible:

We are absolutely committed to making our applications run what most people describe as cross platform…. There is no holding back of anything.

Earlier this week, at its Build Developer Conference, Microsoft announced its most significant initiative yet to bring about this reality: code built into its Windows 10 OS that will enable Android and iOS developers to port apps into the Windows ecosystem more easily.

To make this possible… Windows phones “will include an Android subsystem” meant to play nice with the Java and C++ code developers have already crafted to run on a rival’s operating system…. iOS developers can compile their Objective C code right from Microsoft’s Visual Studio, and turn it into a full-fledged Windows 10 app.

Microsoft also announced that its new browser, rebranded as “Edge,” will run Chrome and Firefox extensions, and that its Office suite would enable a range of third-party services to integrate with Office on Windows, iOS, Android and Mac.

Consumers, developers and Microsoft itself should all benefit from the increased competition that these moves are certain to facilitate.

Most obviously, more consumers may be willing to switch to phones and tablets with the Windows 10 operating system if they can continue to enjoy the apps and extensions they’ve come to rely on when using Google and Apple products. As one commenter said of the move:

I left Windows phone due to the lack of apps. I love the OS though, so if this means all my favorite apps will be on the platform I’ll jump back onto the WP bandwagon in a heartbeat.

And developers should be more willing to invest in their apps and extensions when they can expect additional revenue from yet another platform running them, with minimal additional development effort required.

It’s win-win-win. Except perhaps for Microsoft’s lingering regulatory strategy to hobble Google.

That strategy is built primarily on antitrust claims, most recently rooted in arguments that consumers, developers and competitors alike are harmed by Google’s conduct around Android, which, it is alleged, makes it difficult for OS makers (like Cyanogen) and app developers (like Microsoft Bing) to compete.

But Microsoft’s interoperability announcements (along with a host of other rapidly evolving market characteristics) actually serve to undermine the antitrust arguments that Microsoft, through groups like FairSearch and ICOMP, has largely been responsible for pushing in the EU against Google/Android.

The reality is that, with innovations like the one Microsoft announced this week, Microsoft, Google and Apple (and Samsung, Nokia, Tizen, Cyanogen…) are competing more vigorously on several fronts. Such competition is evidence of a vibrant marketplace that is simply not in need of antitrust intervention.

The supreme irony in this is that such a move represents a (further) nail in the coffin of the supposed “applications barrier to entry” that was central to the US DOJ’s antitrust suit against Microsoft and that factors into the contemporary Android antitrust arguments against Google.

Frankly, the argument was never very convincing. Absent unjustified and anticompetitive efforts to prop up such a barrier, the “applications barrier to entry” is just a synonym for “big.” Admittedly, the DC Court of Appeals in Microsoft was careful — far more careful than the district court — to locate specific, narrow conduct beyond the mere existence of the alleged barrier that it believed amounted to anticompetitive monopoly maintenance. But central to the imposition of liability was the finding that some of Microsoft’s conduct deterred application developers from effectively accessing other platforms, without procompetitive justification.

With the implementation of initiatives like the one Microsoft has now undertaken in Windows 10, however, it appears that such concerns regarding Google and mobile app developers are unsupportable.

Of greatest significance to the current Android-related accusations against Google, the appeals court in Microsoft also reversed the district court’s finding of liability based on tying, noting in particular that:

If OS vendors without market power also sell their software bundled with a browser, the natural inference is that sale of the items as a bundle serves consumer demand and that unbundled sale would not.

Of course this is exactly what Microsoft Windows Phone (which decidedly does not have market power) does, suggesting that the bundling of mobile OS’s with proprietary apps is procompetitive.

Similarly, in reviewing the eventual consent decree in Microsoft, the appeals court upheld the conditions that allowed the integration of OS and browser code, and rejected the plaintiff’s assertion that a prohibition on such technological commingling was required by law.

The appeals court praised the district court’s recognition that an appropriate remedy “must place paramount significance upon addressing the exclusionary effect of the commingling, rather than the mere conduct which gives rise to the effect,” as well as the district court’s acknowledgement that “it is not a proper task for the Court to undertake to redesign products.”  Said the appeals court, “addressing the applications barrier to entry in a manner likely to harm consumers is not self-evidently an appropriate way to remedy an antitrust violation.”

Today, claims that the integration of Google Mobile Services (GMS) into Google’s version of the Android OS is anticompetitive are misplaced for the same reason:

But making Android competitive with its tightly controlled competitors [e.g., Apple iOS and Windows Phone] requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

In fact, some commenters have even suggested that, by effectively making the OS more “open,” Microsoft’s new Windows 10 initiative might undermine the Windows experience in exactly this fashion:

As a Windows Phone developer, I think this could easily turn into a horrible idea…. [I]t might break the whole Windows user experience Microsoft has been building in the past few years. Modern UI design is a different approach from both Android and iOS. We risk having a very unhomogenic [sic] store with lots of apps using different design patterns, and Modern UI is in my opinion, one of the strongest points of Windows Phone.

But just because Microsoft may be willing to take this risk doesn’t mean that any sensible conception of competition law and economics should require Google (or anyone else) to do so, as well.

Most significantly, Microsoft’s recent announcement is further evidence that both technological and contractual innovations can (potentially — the initiative is too new to know its effect) transform competition, undermine static market definitions and weaken theories of anticompetitive harm.

When apps and their functionality are routinely built into some OS’s or set as defaults; when mobile apps are also available for the desktop and are seamlessly integrated to permit identical functions to be performed on multiple platforms; and when new form factors like Apple MacBook Air and Microsoft Surface blur the lines between mobile and desktop, traditional, static anticompetitive theories are out the window (no pun intended).

Of course, it’s always been possible for new entrants to overcome network effects and scale impediments by a range of means. Microsoft itself has in the past offered to pay app developers to write for its mobile platform. Similarly, it offers inducements to attract users to its Bing search engine and it has devised several creative mechanisms to overcome its claimed scale inferiority in search.

A further irony (and market complication) is that now some of these apps — the ones with network effects of their own — threaten in turn to challenge the reigning mobile operating systems, exactly as Netscape was purported to threaten Microsoft’s OS (and lead to its anticompetitive conduct) back in the day. Facebook, for example, now offers not only its core social media function, but also search, messaging, video calls, mobile payments, photo editing and sharing, and other functionality that compete with many of the core functions built into mobile OS’s.

But the desire by apps like Facebook to expand their networks by being on multiple platforms, and the desire by these platforms to offer popular apps in order to attract users, ensure that Facebook is ubiquitous, even without any antitrust intervention. As Timothy Bresnahan, Joe Orsini and Pai-Ling Yin demonstrate:

(1) The distribution of app attractiveness to consumers is skewed, with a small minority of apps drawing the vast majority of consumer demand. (2) Apps which are highly demanded on one platform tend also to be highly demanded on the other platform. (3) These highly demanded apps have a strong tendency to multihome, writing for both platforms. As a result, the presence or absence of apps offers little reason for consumers to choose a platform. A consumer can choose either platform and have access to the most attractive apps.

Of course, even before Microsoft’s announcement, cross-platform app development was common, and third-party platforms like Xamarin facilitated cross-platform development. As Daniel O’Connor noted last year:

Even if one ecosystem has a majority of the market share, software developers will release versions for different operating systems if it is cheap/easy enough to do so…. As [Torsten] Körber documents [here], building mobile applications is much easier and cheaper than building PC software. Therefore, it is more common for programmers to write programs for multiple OSes…. 73 percent of app developers design apps for at least two different mobile OSes, while 62 percent support 3 or more.

Whether Microsoft’s interoperability efforts prove to be “perfect” or not (and some commenters are skeptical), they seem destined to at least further decrease the cost of cross-platform development, thus reducing any “application barrier to entry” that might impede Microsoft’s ability to compete with its much larger rivals.

Moreover, one of the most interesting things about the announcement is that it will enable Android and iOS apps to run not only on Windows phones, but also on Windows computers. Some 1.3 billion PCs run Windows. Forget Windows’ tiny share of mobile phone OS’s; that massive potential PC market (of which Microsoft still has 91 percent) presents an enormous ready-made market for mobile app developers that won’t be ignored.

It also points up the increasing absurdity of compartmentalizing these markets for antitrust purposes. As the relevant distinctions between mobile and desktop markets break down, the idea of Google (or any other company) “leveraging its dominance” in one market to monopolize a “neighboring” or “related” market is increasingly unsustainable. As I wrote earlier this week:

Mobile and social media have transformed search, too…. This revolution has migrated to the computer, which has itself become “app-ified.” Now there are desktop apps and browser extensions that take users directly to Google competitors such as Kayak, eBay and Amazon, or that pull and present information from these sites.

In the end, intentionally or not, Microsoft is (again) undermining its own case. And it is doing so by innovating and competing — those Schumpeterian concepts that were always destined to undermine antitrust cases in the high-tech sector.

If we’re lucky, Microsoft’s new initiatives are the leading edge of a sea change for Microsoft — a different and welcome mindset built on competing in the marketplace rather than at regulators’ doors.

Earlier this week the New Jersey Assembly unanimously passed a bill to allow direct sales of Tesla cars in New Jersey (H/T Marina Lao). The bill

Allows a manufacturer (“franchisor,” as defined in P.L.1985, c.361 (C.56:10-26 et seq.)) to directly buy from or sell to consumers a zero emission vehicle (ZEV) at a maximum of four locations in New Jersey.  In addition, the bill requires a manufacturer to own or operate at least one retail facility in New Jersey for the servicing of its vehicles. The manufacturer’s direct sale locations are not required to also serve as a retail service facility.

The bill amends current law to allow any ZEV manufacturer to directly or indirectly buy from and directly sell, offer to sell, or deal to a consumer a ZEV if the manufacturer was licensed by the New Jersey Motor Vehicle Commission (MVC) on or prior to January 1, 2014.  This bill provides that ZEVs may be directly sold by certain manufacturers, like Tesla Motors, and preempts any rule or regulation that restricts sales exclusively to franchised dealerships.  The provisions of the bill would not prevent a licensed franchisor from operating under an existing license issued by the MVC.

At first blush, it seems good that the legislature is responding to the lunacy of the Christie administration’s decision to enforce a rule prohibiting direct sales of automobiles in New Jersey. We have discussed that decision at length in previous posts here, here, here and here. And Thom and Mike have taken on a similar rule in their home state of Missouri here and here.

In response to New Jersey’s decision to prohibit direct sales, the International Center for Law & Economics organized an open letter to Governor Christie, based in large part on Dan Crane’s writings on the topic here at TOTM, discussing the faulty economics of such a ban. The letter was signed by more than 70 law professors and economists.

But it turns out that the legislative response is nearly as bad as the underlying ban itself.

First, a quick recap.

In our letter we noted that

The Motor Vehicle Commission’s regulation was aimed specifically at stopping one company, Tesla Motors, from directly distributing its electric cars. But the regulation would apply equally to any other innovative manufacturer trying to bring a new automobile to market, as well. There is no justification on any rational economic or public policy grounds for such a restraint of commerce. Rather, the upshot of the regulation is to reduce competition in New Jersey’s automobile market for the benefit of its auto dealers and to the detriment of its consumers. It is protectionism for auto dealers, pure and simple.

While enforcement of the New Jersey ban was clearly aimed directly at Tesla, it has broader effects. And, of course, its underlying logic is simply indefensible, regardless of which particular manufacturer it affects. The letter explains at length the economics of retail distribution and the misguided, anti-consumer logic of the regulation, and concludes by noting that

In sum, we have not heard a single argument for a direct distribution ban that makes any sense. To the contrary, these arguments simply bolster our belief that the regulations in question are motivated by economic protectionism that favors dealers at the expense of consumers and innovative technologies. It is discouraging to see this ban being used to block a company that is bringing dynamic and environmentally friendly products to market. We strongly encourage you to repeal it, by new legislation if necessary.

Thus it seems heartening that the legislature did, indeed, take up our challenge to repeal the ban.

Except that, in doing so, the legislature managed to write a bill that reflects no understanding whatever of the underlying economic issues at stake. Instead, the legislative response appears largely to be the product of rent seeking, pure and simple, offering only a limited response to Tesla’s squeaky wheel (no pun intended) and leaving the core defects of the ban completely undisturbed.

Instead of acknowledging the underlying absurdity of the limit on direct sales, the bill keeps the ban in place and simply offers a limited exception for Tesla (or other zero emission cars). While the innovative and beneficial nature of Tesla’s cars was an additional reason to oppose banning their direct sale, the specific characteristics of the cars are a minor and ancillary reason to oppose the ban. But the New Jersey legislative response is all about the cars’ emissions characteristics, and in no way does it reflect an appreciation for the fundamental economic defects of the underlying rule.

Moreover, the bill permits direct sales at only four locations (why four? No good reason whatever — presumably it was a political compromise, never the stuff of economic reason) and requires Tesla to operate a service center for its cars in the state. In other words, the regulators are still arbitrarily dictating aspects of car manufacturers’ business organization from on high.

Even worse, however, the bill is constructed to be nothing more than a payoff for a specific firm’s lobbying efforts, thus ensuring that the next (non-zero-emission) Tesla to come along will have to undertake the same efforts to pander to the state.

Far from addressing the serious concerns with the direct sales ban, the bill just perpetuates the culture of political rent seeking such regulations create.

Perhaps it’s better than nothing. Certainly it’s better than nothing for Tesla. But overall, I’d say it’s about the worst possible sort of response, short of nothing.

Our TOTM colleague Dan Crane has written a few posts here over the past year or so about attempts by the automobile dealers lobby (and General Motors itself) to restrict the ability of Tesla Motors to sell its vehicles directly to consumers (see here, here and here). Following New Jersey’s adoption of an anti-Tesla direct distribution ban, more than 70 lawyers and economists–including yours truly and several here at TOTM–submitted an open letter to Gov. Chris Christie explaining why the ban is bad policy.

Now it seems my own state of Missouri is getting caught up in the auto dealers’ ploy to thwart pro-consumer innovation and competition. Legislation (HB1124) that was intended to simply update statutes governing the definition, licensing and use of off-road and utility vehicles got co-opted at the last minute in the state Senate. Language was inserted to redefine the term “franchisor” to include any automobile manufacturer, regardless of whether it has any franchise agreements–in direct contradiction to the definition used throughout the rest of the surrounding statutes. The bill defines a “franchisor” as:

“any manufacturer of new motor vehicles which establishes any business location or facility within the state of Missouri, when such facilities are used by the manufacturer to inform, entice, or otherwise market to potential customers, or where customer orders for the manufacturer’s new motor vehicles are placed, received, or processed, whether or not any sales of such vehicles are finally consummated, and whether or not any such vehicles are actually delivered to the retail customer, at such business location or facility.”

In other words, it defines a franchisor as a company that chooses to open its own facility rather than franchise. The bill then goes on to define any facility or business location meeting the above criteria as a “new motor vehicle dealership,” even though no sales or even distribution may actually take place there. Since “franchisors” are already forbidden from owning a “new motor vehicle dealership” in Missouri (a dubious restriction in itself), these perverted definitions effectively ban a company like Tesla from selling directly to consumers.

The bill still needs to go back to the Missouri House of Representatives, where it started out as addressing “laws regarding ‘all-terrain vehicles,’ ‘recreational off-highway vehicles,’ and ‘utility vehicles’.”

This is classic rent-seeking regulation at its finest, using contrived and contorted legislation–not to mention last-minute, underhanded legislative tactics–to prevent competition and innovation that, as General Motors itself pointed out, is based on a more economically efficient model of distribution that benefits consumers. Hopefully the State House…or the Governor…won’t be asleep at the wheel as this legislation speeds through the final days of the session.