
Yesterday Learfield and IMG College inked their recently announced merger. Since the negotiations were made public several weeks ago, the deal has garnered some wild speculation and potentially negative attention. Now that the merger has been announced, it’s bound to attract even more attention and conjecture.

On the field of competition, however, the market realities that support the merger’s approval are compelling. And, more importantly, the features of this merger provide critical lessons on market definition, barriers to entry, and other aspects of antitrust law related to two-sided and advertising markets that can be applied to numerous matters vexing competition commentators.

First, some background

Learfield and IMG specialize in managing multimedia rights (MMRs) for intercollegiate sports. They are, in effect, classic advertising intermediaries, facilitating the monetization by colleges of radio broadcast advertising and billboard, program, and scoreboard space during games (among other things), and the purchase by advertisers of access to these valuable outlets.

Although these transactions can certainly be (and very often are) entered into by colleges and advertisers directly, firms like Learfield and IMG allow colleges to outsource the process — as one firm’s tag line puts it, “We Work | You Play.” Most important, by bringing multiple schools’ MMRs under one roof, these firms can reduce the transaction costs borne by advertisers in accessing multiple outlets as part of a broad-based marketing plan.

Media rights and branding are a notable source of revenue for collegiate athletic departments: on average, they account for about 3% of these revenues. While that pales in comparison to TV rights, ticket sales, and fundraising, for major programs MMRs may be the next most important revenue source after these.

Many collegiate programs retain some or all of their multimedia rights and use in-house resources to market them. In some cases schools license MMRs through their athletic conference. In other cases, schools ink deals to outsource their MMRs to third parties, such as Learfield, IMG, JMI Sports, Outfront Media, and Fox Sports, among several others. A few schools even use professional sports teams to manage their MMRs (the owner of the Red Sox manages Boston College’s MMRs, for example).

Schools switch among MMR managers with some regularity and, apparently, in most cases not to one of the merging parties. Michigan State, for example, was well known for handling its MMRs in-house. But in 2016 the school entered into a 15-year deal with Fox Sports, with an estimated guaranteed minimum of $150 million. In 2014 Arizona State terminated its MMR deal with IMG and took its MMRs in-house. Then, in 2016, the Sun Devils entered into a first-of-its-kind arrangement with the Pac 12 in which the school manages and sells its own marketing and media rights while the conference handles core business functions for the sales and marketing team (like payroll, accounting, human resources, and employee benefits). The most successful new entrant on the block, JMI Sports, won Kentucky, Clemson, and the University of Pennsylvania from Learfield or IMG. Outfront Media was spun off from CBS in 2014 and has become one of the strongest MMR intermediary competitors, handling some of the biggest names in college sports, including LSU, Maryland, and Virginia. All told, eight recent national Division I champions are served by MMR managers other than IMG and Learfield.

The supposed problem

As noted above, the most obvious pro-competitive benefit of the merger is the reduction in transaction costs for firms looking to advertise in multiple markets. But conferring that benefit (which, of course, also benefits the schools, whose marketing properties become easier to access) also means a dreaded increase in size, measured by the number of schools’ MMRs managed. So is this cause for concern?

Jason Belzer, a professor at Rutgers University and founder of the sports consulting firm GAME, Inc., has said that the merger will create a juggernaut — yes, “a massive inexorable force… that crushes whatever is in its path” — that is likely to invite antitrust scrutiny. The New York Times opines that the deal will allow Learfield to “tighten its grip — for nearly total control — on this niche but robust market,” “surely” attracting antitrust scrutiny. But these assessments seem dramatically overblown, and insufficiently grounded in the dynamics of the market.

Belzer’s concerns seem to be merely the size of the merging parties — again, measured by the number of schools’ rights they manage — and speculation that the merger would bring to an end “any” opportunity for entry by a “major” competitor. These are misguided concerns.

To begin, the focus on the potential entry of a “major” competitor is an odd standard that ignores the actual and potential entry of many smaller competitors that are able to win some of the most prestigious and biggest schools. In fact, many in the industry argue — rightly — that there are few economies of scale for colleges. Most of these firms’ employees are dedicated to a particular school and those costs must be incurred for each school, no matter the number, and borne by new entrants and incumbents alike. That means a small firm can profitably compete in the same market as larger firms — even “juggernauts.” Indeed, every college that brings MMR management in-house is, in fact, an entrant — and there are some big schools in big conferences that manage their MMRs in-house.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to in-house MMR management indicate that no competitor has any measurable market power that can disadvantage schools or advertisers.

Indeed, from the perspective of the school, the true relevant market is no broader than each school’s own rights. Even after the merger there will be at least five significant firms competing for those rights, not to mention each school’s conference, new entrants, and the school itself.

The two-sided market that isn’t really two-sided

Standard antitrust analysis, of course, focuses on consumer benefits: Will the merger make consumers better off (or no worse off)? But too often casual antitrust analysis of two-sided markets trips up on identifying just who the consumer is — and what the relevant market is. For a shopping mall, is the consumer the retailer or the shopper? For newspapers and search engines, is the customer the advertiser or the reader? For intercollegiate sports multimedia rights licensing, is the consumer the college or the advertiser?

Media coverage of the anticipated IMG/Learfield merger largely ignores advertisers as consumers and focuses almost exclusively on the schools’ relationship with intermediaries — as purchasers of marketing services, rather than sellers of advertising space.

Although it’s difficult to identify the source of this odd bias, it seems to be based on the notion that, while corporations like Coca-Cola and General Motors have some sort of countervailing market power against marketing intermediaries, universities don’t. With advertisers out of the picture, media coverage suggests that, somehow, schools may be worse off if the merger were to proceed. But missing from this assessment are two crucial facts that undermine the story: First, schools actually have enormous market power; and, second, schools compete in the business of MMR management.

This second factor suggests, in fact, that sometimes there may be nothing special about two-sided markets sufficient to give rise to a unique style of antitrust analysis.

Much of the antitrust confusion seems to be based on confusion over the behavior of two-sided markets. A two-sided market is one in which two sets of actors interact through an intermediary or platform, which, in turn, facilitates the transactions, often enabling transactions to take place that otherwise would be too expensive absent the platform. A shopping mall is a two-sided market where shoppers can find their preferred stores. Stores would operate without the platform, but perhaps not as many, and not as efficiently. Newspapers, search engines, and other online platforms are two-sided markets that bring together advertisers and eyeballs that might not otherwise find each other absent the platform. And a collegiate multimedia rights management firm is a two-sided market where colleges that want to sell advertising space get together with firms that want to advertise their goods and services.

Yet there is nothing particularly “transformative” about the outsourcing of MMR management. Credit cards, for example, are qualitatively different from in-store credit operations. They are two-sided platforms that substitute for in-house operations — but they also create an entirely new product and product market. MMR marketing firms do lower some transaction costs and reduce risk for collegiate sports marketing, but the product is not substantially changed — in fact, schools must have the knowledge and personnel to assess and enter into the initial sale of MMRs to an intermediary and, because of ongoing revenue-sharing and coordination with the intermediary, must devote ongoing resources even after the initial sale.

But will a merged entity have “too much” power? Imagine if a single firm owned the MMRs for nearly all intercollegiate competitors. How would it be able to exercise its supposed market power? Because each deal is negotiated separately, and, other than some mundane, fixed back-office expenses, the costs of rights management must be incurred whether a firm negotiates one deal or 100, there are no substantial economies of scale in the purchasing of MMRs. As a result, the existence of deals with other schools won’t automatically translate into better deals with subsequent schools.

Now, imagine if one school retained its own MMRs, but decided it might want to license them to an intermediary. Does it face anticompetitive market conditions if there is only a single provider of such services? To begin with, there is never only a single provider, as each school can provide the services in-house. This is not even the traditional monopoly constraint of simply “not buying,” which makes up the textbook “deadweight loss” from monopoly: In this case “not buying” does not mean going without; it simply means providing for oneself.

More importantly, because the school has a monopoly on access to its own marketing rights (to say nothing of access to its own physical facilities) unless and until it licenses them, its own bargaining power is largely independent of an intermediary’s access to other schools’ rights. If it were otherwise, each school would face anticompetitive market conditions simply by virtue of other schools’ owning their own rights!

It is possible that a larger, older firm will have more expertise and will be better able to negotiate deals with other schools — i.e., it will reap the benefits of learning by doing. But the returns to learning by doing derive from the ability to offer higher-quality/lower-cost services over time — which are a source of economic benefit, not cost. At the same time, the bulk of the benefits of experience may be gained over time with even a single set of MMRs, given the ever-varying range of circumstances even a single school will create: There may be little additional benefit (and, to be sure, there is additional cost) from managing multiple schools’ MMRs. And whatever benefits specialized firms offer, they also come with agency costs, and an intermediary’s specialized knowledge about marketing MMRs may or may not outweigh a school’s own specialized knowledge about the nuances of its particular circumstances. Moreover, because of knowledge spillovers and employee turnover, this marketing expertise is actually widely distributed; not surprisingly, JMI Sports’ MMR unit, one of the most recent and successful entrants into the business, was started by a former employee of IMG. Several other firms started out the same way.

The right way to begin thinking about the issue is this: Imagine if MMR intermediaries didn’t exist — what would happen? In this case, the answer is readily apparent because, for a significant number of schools (about 37% of Division I schools, in fact) MMR licensing is handled in-house, without the use of intermediaries. These schools do, in fact, attract advertisers, and there is little indication that they earn less net profit for going it alone. Schools with larger audiences, better targeted to certain advertisers’ products, command higher prices. Each school enjoys an effective monopoly over advertising channels around its own games, and each has bargaining power derived from its particular attractiveness to particular advertisers.

In effect, each school faces a number of possible options for MMR monetization — most notably a) up-front contracting to an intermediary, which then absorbs the risk, expense, and possible up-side of ongoing licensing to advertisers, or b) direct, ongoing licensing to advertisers. The presence of the intermediary doesn’t appreciably change the market, nor the relative bargaining power of sellers (schools) and buyers (advertisers) of advertising space any more than the presence of temp firms transforms the fundamental relationship between employers and potential part-time employees.

In making their decisions, schools always have the option of taking their MMR management in-house. In facing competing bids from firms such as IMG or Learfield, from their own conferences, or from professional sports teams, the opening bid, in a sense, comes from the school itself. Even the biggest intermediary in the industry must offer the school a deal that is at least as good as managing the MMRs in-house.
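The logic of that in-house floor can be sketched as a trivial comparison. The function name and dollar figures below are hypothetical, purely to illustrate the reasoning:

```python
# Hypothetical sketch: a school accepts an intermediary's bid only if it
# beats the expected net profit of managing MMRs in-house. The in-house
# option is the "opening bid" that sets the floor.

def best_option(in_house_net_profit, bids):
    """Return (name, value) of the winning option; in-house sets the floor."""
    best_name, best_value = "in-house", in_house_net_profit
    for name, value in bids.items():
        if value > best_value:
            best_name, best_value = name, value
    return best_name, best_value

# Hypothetical annual figures, in $ millions:
print(best_option(10.0, {"Intermediary A": 9.5, "Intermediary B": 11.0}))
# ('Intermediary B', 11.0)
```

Any bid below the in-house baseline loses regardless of how many other schools the bidder represents; that is the sense in which the school itself makes the opening bid.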

The true relevant market: Advertising

According to economist Andy Schwarz, if the relevant market is “college-based marketing services to Power 5 schools, the antitrust authorities may have more concerns than if it’s marketing services in sports.” But this entirely misses the real market exchange here. Sure, marketing services are purchased by schools, but their value to the schools is independent of the number of other schools an intermediary also markets.

Advertisers always have the option of deploying their ad dollars elsewhere. If Coca-Cola wants to advertise on Auburn’s stadium video board, it’s because Auburn’s video board is a profitable outlet for advertising, not because the Auburn ads are bundled with advertising at dozens of other schools (although that bundling may reduce the total cost of advertising on Auburn’s scoreboard as well as other outlets). Similarly, Auburn is seeking the highest bidder for space on its video board. It does not matter to Auburn that the University of Georgia is using the same intermediary to sell ads on its stadium video board.

The willingness of purchasers — say, Coca-Cola or Toyota — to pay for collegiate multimedia advertising is a function of the school that licenses it (net of transaction costs) — and MMR agents like IMG and Learfield commit substantial guaranteed sums and a share of any additional profits for the rights to sell that advertising: For example, IMG recently agreed to pay $150 million over 10 years to renew its MMR contract at UCLA. But this is the value of a particular, niche form of advertising, determined within the context of the broader advertising market. How much pricing power over scoreboard advertising does any university, or even any group of universities under the umbrella of an intermediary, have in a world in which Coke and Toyota can advertise virtually anywhere — including during commercial breaks in televised intercollegiate games, which are licensed separately from the MMRs licensed by companies like IMG and Learfield?

There is, in other words, a hard ceiling on what intermediaries can charge schools for MMR marketing services: The schools’ own cost of operating a comparable program in-house.

To be sure, for advertisers, large MMR marketing firms lower the transaction costs of buying advertising space across a range of schools, presumably increasing demand for intercollegiate sports advertising and sponsorship. But sponsors and advertisers have a wide range of options for spending their marketing dollars. Intercollegiate sports MMRs are a small slice of the sports advertising market, which, in turn, is a small slice of the total advertising market. Even if one were to incorrectly describe the combined entity as a “juggernaut” in intercollegiate sports, the MMR rights it sells would still be a flyspeck in the broader market of multimedia advertising.

According to one calculation (by MoffettNathanson), total ad spending in the U.S. was about $191 billion in 2016 (Pew Research Center estimates total ad revenue at $240 billion) and the global advertising market was estimated to be worth about $493 billion. The intercollegiate MMR segment represents a minuscule fraction of that. According to Jason Belzer, “[a]t the time of its sale to WME in 2013, IMG College’s yearly revenue was nearly $500 million….” Another source puts it at $375 million. Either way, it’s a fraction of one percent of the total market, and even combined with Learfield it will remain a minuscule fraction. Even if one were to define a far narrower sports sponsorship market, which a Price Waterhouse estimate puts at around $16 billion, the combined companies would still have a tiny market share.
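A quick back-of-the-envelope check of those proportions, using only the figures cited above (taking the higher, $500 million estimate of IMG College’s yearly revenue; values in billions of dollars):

```python
# Revenue figures cited above, in billions of USD.
img_college_revenue = 0.5     # upper estimate of IMG College yearly revenue
us_ad_spending = 191.0        # MoffettNathanson estimate for 2016
sports_sponsorship = 16.0     # Price Waterhouse sports sponsorship estimate

share_of_us_ads = img_college_revenue / us_ad_spending
share_of_sponsorship = img_college_revenue / sports_sponsorship

print(f"Share of total U.S. ad spending: {share_of_us_ads:.2%}")     # ~0.26%
print(f"Share of sports sponsorship: {share_of_sponsorship:.2%}")    # ~3.1%
```

Even against the narrowest cited market, sports sponsorship, the cited IMG revenue amounts to roughly three percent; against total U.S. ad spending it rounds to about a quarter of one percent.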

As sellers of MMRs, colleges are competing with each other, professional sports such as the NFL and NBA, and with non-sports marketing opportunities. And it’s a huge and competitive market.

Barriers to entry

While capital requirements and the presence of long-term contracts may present challenges to potential entrants into the business of marketing MMRs, these potential entrants face virtually no barriers that are not, or have not been, faced by incumbent providers. In this context, one should keep in mind two factors. First, barriers to entry are properly defined as costs incurred by new entrants that are not incurred by incumbents (no matter what Joe Bain says; Stigler always wins this dispute…). Every firm must bear the cost of negotiating and managing each school’s MMRs, and, as noted, these costs don’t vary significantly with the number of schools being managed. And every entrant needs approximately the same capital and human resources per similarly sized school as every incumbent. Thus, in this context, neither the need for capital nor dedicated employees is properly construed as a barrier to entry.

Second, as the DOJ and FTC acknowledge in the Horizontal Merger Guidelines, any merger can be lawful under the antitrust laws, no matter its market share, where there are no significant barriers to entry:

The prospect of entry into the relevant market will alleviate concerns about adverse competitive effects… if entry into the market is so easy that the merged firm and its remaining rivals in the market, either unilaterally or collectively, could not profitably raise price or otherwise reduce competition compared to the level that would prevail in the absence of the merger.

As noted, there are low economies of scale in the business, with most of the economies occurring in the relatively small “back office” work of payroll, accounting, human resources, and employee benefits. Since the 2000s, the entry of several significant competitors — many entering with only one or two schools or specializing in smaller or niche markets — strongly suggests that there are no economically important barriers to entry. And these firms have entered and succeeded with a wide range of business models and firm sizes:

  • JMI Sports — a “rising boutique firm” — hired Tom Stultz, the former senior vice president and managing director of IMG’s MMR business, in 2012. JMI won its first (and thus, at the time, only) MMR bid in 2014 at the University of Kentucky, besting IMG to win the deal.
  • Peak Sports MGMT, founded in 2012, is a small-scale MMR firm that focuses on lesser Division I and II schools in Texas and the Midwest. It manages just seven small properties, including Southland Conference schools like the University of Central Arkansas and Southeastern Louisiana University.
  • Fox Sports entered the business in 2008 with a deal with the University of Florida. It now handles MMRs for schools like Georgetown, Auburn, and Villanova. Fox’s entry suggests that other media companies — like ESPN — that may already own TV broadcast rights are also potential entrants.
  • In 2014 the sports advertising firm Van Wagner hired three former Nelligan employees to make a play for the college sports space. In 2015 the company won its first MMR bid at Florida International University, reportedly against seven other participants. It now handles more than a dozen schools including Georgia State (which it won from IMG), Loyola Marymount, Pepperdine, Stony Brook, and Santa Clara.
  • In 2001 Fenway Sports Group, parent company of the Boston Red Sox and Liverpool Football Club, entered into an MMR agreement with Boston College. And earlier this year the Tampa Bay Lightning hockey team began handling multimedia marketing for the University of South Florida.

Potential new entrants abound. Most obviously, sports networks like ESPN could readily follow Fox Sports’ lead and advertising firms could follow Van Wagner’s. These companies have existing relationships and expertise that position them for easy entry into the MMR business. Moreover, there are already several companies that handle the trademark licensing for schools, any of which could move into the MMR management business, as well; both IMG and Learfield already handle licensing for a number of schools. Most notably, Fermata Partners, founded in 2012 by former IMG employees and acquired in 2015 by CAA Sports (a division of Creative Artists Agency), has trademark licensing agreements with Georgia, Kentucky, Miami, Notre Dame, Oregon, Virginia, and Wisconsin. It could easily expand into selling MMR rights for these and other schools. Other licensing firms like Exemplar (which handles licensing at Columbia) and 289c (which handles licensing at Texas and Ohio State) could also easily expand into MMR.

Given the relatively trivial economies of scale, the minimum viable scale for a new entrant appears to be approximately one school — a size that each school’s in-house operations, of course, automatically meets. Moreover, the Peak Sports, Fenway, and Tampa Bay Lightning examples suggest that there may be particular benefits to local, regional, or category specialization, suggesting that innovative, new entry is not only possible, but even likely, as the business continues to evolve.

Conclusion

A merger between IMG and Learfield should not raise any antitrust issues. College sports is a small slice of the total advertising market. Even a so-called “juggernaut” in college sports multimedia rights is a bit player in the broader market of multimedia marketing.

The demonstrated entry of new competitors and the transitions of schools from one provider to another or to in-house MMR management indicate that no competitor has any measurable market power that can disadvantage schools or advertisers.

The term “juggernaut” entered the English language because of misinterpretation and exaggeration of actual events. Fears of the IMG/Learfield merger crushing competition are similarly based on a misinterpretation of two-sided markets and a misunderstanding of the reality of the market for college multimedia rights management. Importantly, the case is also a cautionary tale for those who would identify narrow, contract-, channel-, or platform-specific relevant markets in circumstances where a range of intermediaries and direct relationships can compete to offer the same service as those being scrutinized. Antitrust advocates have a long and inglorious history of defining markets by channels of distribution or other convenient, yet often economically inappropriate, combinations of firms or products. Yet the presence of marketing or other intermediaries does not automatically transform a basic, commercial relationship into a novel, two-sided market necessitating narrow market definitions and creative economics.

In a recent post at the (appallingly misnamed) ProMarket blog (the blog of the Stigler Center at the University of Chicago Booth School of Business — George Stigler is rolling in his grave…), Marshall Steinbaum keeps alive the hipster-antitrust assertion that lax antitrust enforcement — this time in the labor market — is to blame for… well, most? all? of what’s wrong with “the labor market and the broader macroeconomic conditions” in the country.

In this entry, Steinbaum takes particular aim at the US enforcement agencies, which he claims do not consider monopsony power in merger review (and other antitrust enforcement actions) because their current consumer welfare framework somehow doesn’t recognize monopsony as a possible harm.

This will probably come as news to the agencies themselves, whose Horizontal Merger Guidelines devote an entire (albeit brief) section (section 12) to monopsony, noting that:

Mergers of competing buyers can enhance market power on the buying side of the market, just as mergers of competing sellers can enhance market power on the selling side of the market. Buyer market power is sometimes called “monopsony power.”

* * *

Market power on the buying side of the market is not a significant concern if suppliers have numerous attractive outlets for their goods or services. However, when that is not the case, the Agencies may conclude that the merger of competing buyers is likely to lessen competition in a manner harmful to sellers.

Steinbaum fails to mention the HMGs, but he does point to a US submission to the OECD to make his point. In that document, the agencies state that

The U.S. Federal Trade Commission (“FTC”) and the Antitrust Division of the Department of Justice (“DOJ”) [] do not consider employment or other non-competition factors in their antitrust analysis. The antitrust agencies have learned that, while such considerations “may be appropriate policy objectives and worthy goals overall… integrating their consideration into a competition analysis… can lead to poor outcomes to the detriment of both businesses and consumers.” Instead, the antitrust agencies focus on ensuring robust competition that benefits consumers and leave other policies such as employment to other parts of government that may be specifically charged with or better placed to consider such objectives.

Steinbaum, of course, cites only the first sentence. And he uses it as a launching-off point to attack the notion that antitrust is an improper tool for labor market regulation. But if he had just read a little bit further in the (very short) document he cites, Steinbaum might have discovered that the US antitrust agencies have, in fact, challenged the exercise of collusive monopsony power in labor markets. As footnote 19 of the OECD submission notes:

Although employment is not a relevant policy goal in antitrust analysis, anticompetitive conduct affecting terms of employment can violate the Sherman Act. See, e.g., DOJ settlement with eBay Inc. that prevents the company from entering into or maintaining agreements with other companies that restrain employee recruiting or hiring; FTC settlement with ski equipment manufacturers settling charges that companies illegally agreed not to compete for one another’s ski endorsers or employees. (Emphasis added).

And, ironically, while asserting that labor market collusion doesn’t matter to the agencies, Steinbaum himself points to “the Justice Department’s 2010 lawsuit against Silicon Valley employers for colluding not to hire one another’s programmers.”

Steinbaum instead opts for a willful misreading of the first sentence of the OECD submission. But what the OECD document refers to, of course, are situations where two firms merge, no market power is created (either in input or output markets), but people are laid off because the merged firm does not need all of, say, the IT and human resources employees previously employed in the pre-merger world.

Does Steinbaum really think this is grounds for challenging the merger on antitrust grounds?

Actually, his post suggests that he does indeed think so, although he doesn’t come right out and say it. What he does say — as he must in order to bring antitrust enforcement to bear on the low- and unskilled labor markets (e.g., burger flippers; retail cashiers; Uber drivers) he purports to care most about — is that:

Employers can have that control [over employees, as opposed to independent contractors] without first establishing themselves as a monopoly—in fact, reclassification [of workers as independent contractors] is increasingly standard operating procedure in many industries, which means that treating it as a violation of Section 2 of the Sherman Act should not require that outright monopolization must first be shown. (Emphasis added).

Honestly, I don’t have any idea what he means. Somehow, because firms hire independent contractors where at one time long ago they might have hired employees… they engage in Sherman Act violations, even if they don’t have market power? Huh?

I get why he needs to try to make this move: As I intimated above, there is probably not a single firm in the world that hires low- or unskilled workers that has anything approaching monopsony power in those labor markets. Even Uber, the example he uses, has nothing like monopsony power, unless perhaps you define the market (completely improperly) as “drivers already working for Uber.” Even then Uber doesn’t have monopsony power: There can be no (or, at best, virtually no) markets in the world where an Uber driver has no other potential employment opportunities but working for Uber.

Moreover, how on earth is hiring independent contractors evidence of anticompetitive behavior? “Reclassification” is not, in fact, “standard operating procedure.” It is the case that in many industries firms (unilaterally) often decide to contract out to specialized firms the hiring of low- and unskilled workers over whom they do not need to exercise direct oversight, thus not employing those workers directly. That isn’t “reclassification” of existing workers who have no choice but to accept their employer’s terms; it’s a long-term evolution of the economy toward specialization, enabled in part by technology.

And if we’re really concerned about what “employee” and “independent contractor” mean for workers and employment regulation, we should reconsider those outdated categories. Firms are faced with a binary choice: hire employees or engage independent contractors. Neither really fits many of today’s employment arrangements very well, but that’s the choice firms are given. That they sometimes choose “independent contractor” over “employee” is hardly evidence of anticompetitive conduct meriting antitrust enforcement.

The point is: The notion that any of this is evidence of monopsony power, or that the antitrust enforcement agencies don’t care about monopsony power — because, Bork! — is absurd.

Even more absurd is the notion that the antitrust laws should be used to effect Steinbaum’s preferred market regulations — independent of proof of actual anticompetitive effect. I get that it’s hard to convince Congress to pass the precise laws you want all the time. But simply routing around Congress and using the antitrust statutes as a sort of meta-legislation to enact whatever happens to be Marshall Steinbaum’s preferred regulation du jour is ridiculous.

Which is a point the OECD submission made (again, if only Steinbaum had read beyond the first sentence…):

[T]wo difficulties with expanding the scope of antitrust analysis to include employment concerns warrant discussion. First, a full accounting of employment effects would require consideration of short-term effects, such as likely layoffs by the merged firm, but also long-term effects, which could include employment gains elsewhere in the industry or in the economy arising from efficiencies generated by the merger. Measuring these effects would [be extremely difficult]. Second, unless a clear policy spelling out how the antitrust agency would assess the appropriate weight to give employment effects in relation to the proposed conduct or transaction’s procompetitive and anticompetitive effects could be developed, [such enforcement would be deeply problematic, and essentially arbitrary].

To be sure, the agencies may not sufficiently appreciate that they already face the problem of reconciling multidimensional effects — e.g., short-, medium-, and long-term price effects, innovation effects, product quality effects, etc. But there is no reason to exacerbate the problem by asking them to also consider employment effects. Especially not in Steinbaum’s world, in which certain employment effects are problematic even without evidence of market power or even actual anticompetitive harm, just because he says so.

Consider how this might play out:

Suppose that Pepsi, Coca-Cola, Dr. Pepper… and every other soft drink company in the world attempted to merge, creating a monopoly soft drink manufacturer. In what possible employment market would even this merger create a monopsony in which anticompetitive harm could be tied to the merger? In the market for “people who know soft drink secret formulas?” Yet Steinbaum would have the Sherman Act enforced against such a merger not because it might create a product market monopoly, but because the existence of a product market monopoly means the firm must be able to do bad things in other markets, as well. For Steinbaum and all the other scolds who see concentration as the source of all evil, the dearth of evidence to support such a claim is no barrier (on which, see, e.g., this recent, content-less NYT article (that, naturally, quotes Steinbaum) on how “big business may be to blame” for the slowing rate of startups).

The point is, monopoly power in a product market does not necessarily have any relationship to monopsony power in the labor market. Simply asserting that it does — and lambasting the enforcement agencies for not just accepting that assertion — is farcical.

The real question, however, is what has happened to the University of Chicago that it continues to provide a platform for such nonsense?

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” It’s also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is in the characterization that placement is more important than relevance in influencing user behavior, the evidence cited by the Commission to demonstrate that doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results” and glosses over the fact that the “prominent placement” of Google’s “results” is not only a difference in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different than the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich and attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.” Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers. In fact, fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think.

Today, over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but also don’t figure in the Commission’s analysis, apparently.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors complaining that the world is evolving around them don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

On July 1, the minimum wage will spike in several cities and states across the country. Portland, Oregon’s minimum wage will rise by $1.50 to $11.25 an hour. Los Angeles will also hike its minimum wage by $1.50 to $12 an hour. Recent research shows that these hikes will make low wage workers poorer.

A study supported and funded in part by the Seattle city government was released this week, along with an NBER paper evaluating Seattle’s minimum wage increase to $13 an hour. The papers find that the increase to $13 an hour had significant negative impacts on employment and led to lower incomes for minimum wage workers.

The study is the first study of a very high minimum wage for a city. During the study period, Seattle’s minimum wage increased from what had been the nation’s highest state minimum wage to an even higher level. It is also unique in its use of administrative data that has much more detail than is usually available to economics researchers.

Conclusions from the research focusing on Seattle’s increase to $13 an hour are clear: The policy harms those it was designed to help.

  • A loss of more than 5,000 jobs and a 9 percent reduction in hours worked by those who retained their jobs.
  • Low-wage workers lost an average of $125 per month. The minimum wage has always been a terrible way to reduce poverty. In 2015 and 2016, I presented analysis to the Oregon Legislature indicating that incomes would decline with a steep increase in the minimum wage. The Seattle study provides evidence backing up that forecast.
  • Minimum wage supporters point to research from the 1990s that made headlines with its claims that minimum wage increases had no impact on restaurant employment. The authors of the Seattle study were able to replicate the results of these papers by using their own data and imposing the same limitations that the earlier researchers had faced. The Seattle study shows that those earlier papers’ findings were likely driven by their approach and data limitations. This is a big deal, and a novel research approach that gives strength to the Seattle study’s results.

Some inside baseball.

The Seattle Minimum Wage Study was supported and funded in part by the Seattle city government. It’s rare that policymakers make any effort to measure the effectiveness of their policies, so Seattle should get some points for transparency.

Or not so transparent: The mayor of Seattle commissioned another study, by an advocacy group at Berkeley whose previous work on the minimum wage is uniformly in favor of hiking the minimum wage (they testified before the Oregon Legislature to cheerlead the state’s minimum wage increase). It should come as no surprise that the Berkeley group released its report several days before the city’s “official” study came out.

You might think to yourself, “OK, that’s Seattle. Seattle is different.”

But, maybe Seattle is not that different. In fact, maybe the negative impacts of high minimum wages are universal, as seen in another study that came out this week, this time from Denmark.

In Denmark the minimum wage jumps up by 40 percent when a worker turns 18. The Danish researchers found that this steep increase was associated with employment dropping by one-third, as seen in the chart below from the paper.

[Figure 1 from Kreiner et al.: employment around the age-18 minimum wage increase in Denmark]

Let’s look at what’s going to happen in Oregon. The state’s employment department estimates that about 301,000 jobs will be affected by the rate increase. With employment of almost 1.8 million, that means one in six workers will be affected by the steep hikes going into effect on July 1. That’s a big piece of the workforce. By way of comparison, in the past when the minimum wage would increase by five or ten cents a year, only about six percent of the workforce was affected.
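The one-in-six figure follows directly from the state’s estimates; a quick back-of-the-envelope check (using only the 301,000 affected jobs and 1.8 million employed cited above):

```python
# Sanity check of the Oregon figures cited above:
# ~301,000 jobs affected out of roughly 1.8 million employed.
affected_jobs = 301_000
total_employment = 1_800_000

share_affected = affected_jobs / total_employment
print(f"Share of workers affected: {share_affected:.1%}")  # 16.7%, i.e. about one in six
```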

This is going to disproportionately affect youth employment. As noted in my testimony to the legislature, unemployment for Oregonians age 16 to 19 is 8.5 percentage points higher than the national average. This was not always the case. In the early 1990s, Oregon’s youth had roughly the same rate of unemployment as the U.S. as a whole. Then, as Oregon’s minimum wage rose relative to the federal minimum wage, Oregon’s youth unemployment worsened. Just this week, Multnomah County made a desperate plea for businesses to hire more youth as summer interns.

It has been suggested that Oregon youth have traded education for work experience—in essence, they have opted to stay in high school or enroll in higher education instead of entering the workforce. The figure below shows, however, that youth unemployment has increased for both those enrolled in school and those who are not enrolled in school. The figure debunks the notion that education and employment are substitutes. In fact, the large number of students seeking work demonstrates many youth want employment while they further their education.

[Figure: Oregon youth unemployment rates, for youth enrolled in school and youth not enrolled in school]

None of these results should be surprising. Minimum wage research is more than a hundred years old. Aside from the “man bites dog” research from the 1990s, economists were broadly in agreement that higher minimum wages would be associated with reduced employment, especially among youth. The research published this week is groundbreaking in its data and methodology. At the same time, the results are unsurprising to anyone with any understanding of economics or experience running a business.

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission Decision in the Google Shopping case — one cannot assess this until we have the text of the decision — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a formidable statement. In 2016, another official EU service published statistics that described Alphabet as increasing its R&D by 22% and ranked it as the world’s 4th top R&D investor. Sure, it can always be better. And sure, this does not excuse everything. But still. The press conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or as a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals against (i) Google in relation to other product lines, but also against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn the lesson of the Microsoft remedy quagmire: it refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for this, and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers willing to acquire the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of that can be found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases where pricing remedies are costly, impractical, and ultimately inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). In fact, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage that this topic has attracted in recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense), when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question cuts further than the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements so there is little competitive relationship between both products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations are completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or legal theory of liability will ever be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case-law on unilateral conduct is to consider the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, exclusion of every and any firm is a per se concern, regardless of evidence of efficiency, entry or rivalry.

In turn, I tend to think that Google has a stronger game from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less keen to debate the substance of the law in unilateral conduct cases. This case could thus be a test case in terms of setting boundaries on how freely the Commission can U-turn a case (the Commissioner said “take the case forward in a different way”).

Today, the Senate Committee on Health, Education, Labor, and Pensions (HELP) enters the drug pricing debate with a hearing on “The Cost of Prescription Drugs: How the Drug Delivery System Affects What Patients Pay.”  By questioning the role of the drug delivery system in pricing, the hearing goes beyond the more narrow focus of recent hearings that have explored how drug companies set prices.  Instead, today’s hearing will explore how pharmacy benefit managers, insurers, providers, and others influence the amounts that patients pay.

In 2016, net U.S. drug spending increased by 4.8% to $323 billion (after adjusting for rebates and off-invoice discounts).  This rate of growth slowed to less than half the rates of 2014 and 2015, when net drug spending grew at rates of 10% and 8.9% respectively.  Yet despite the slowing in drug spending, the public outcry over the cost of prescription drugs continues.

In today’s hearing, there will be testimony both on the various causes of drug spending increases and on various proposals that could reduce the cost of drugs.  Several of the proposals will focus on ways to increase competition in the pharmaceutical industry, and in turn, reduce drug prices.  I have previously explained several ways that the government could reduce prices through enhanced competition, including reducing the backlog of generic drugs awaiting FDA approval and expediting the approval and acceptance of biosimilars.  Other proposals today will likely call for regulatory reforms to enable innovative contractual arrangements that allow for outcome- or indication-based pricing and other novel reimbursement designs.

However, some proposals will undoubtedly return to the familiar call for more government negotiation of drug prices, especially drugs covered under Medicare Part D.  As I’ve discussed in a previous post, in order for government negotiation to significantly lower drug prices, the government must be able to put pressure on drug makers to secure price concessions. This could be achieved if the government could set prices administratively, penalize manufacturers that don’t offer price reductions, or establish a formulary.  Setting prices or penalizing drug makers that don’t reduce prices would produce the same disastrous effects as price controls: drug shortages in certain markets, increased prices for non-Medicare patients, and reduced incentives for innovation. A government formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would mean that many patients could no longer access some of their optimal drugs.

As lawmakers seriously consider changes that would produce these negative consequences, industry would do well to voluntarily constrain prices. Indeed, in the last year, many drug makers have pledged to limit price increases to keep drug spending under control. Allergan was first, with its “social contract” introduced last September that promised to keep price increases below 10 percent. Since then, Novo Nordisk, AbbVie, and Takeda have also voluntarily committed to single-digit price increases.

So far, the evidence shows the drug makers are sticking to their promises. Allergan has raised the price of U.S. branded products by an average of 6.7% in 2017, and no drug’s list price has increased by more than single digits. In contrast, Pfizer, which has made no pricing commitment, has raised the price of many of its drugs by 20%.

If more drug makers brought about meaningful change by committing to voluntary pricing restraints, the industry could prevent the market-distorting consequences of government intervention while helping patients afford the drugs they need.   Moreover, avoiding intrusive government mandates and price controls would preserve drug innovation that has brought life-saving and life-enhancing drugs to millions of Americans.

It’s fitting that FCC Chairman Ajit Pai recently compared his predecessor’s jettisoning of the FCC’s light-touch framework for Internet access regulation, done without hard evidence, to the Oklahoma City Thunder’s James Harden trade. That infamous 2012 deal broke up a young nucleus of three of the best players in the NBA because keeping all three might someday create salary-cap concerns. What few saw coming was a new TV deal in 2015 that sent the salary cap soaring.

If it’s hard to predict how the market will evolve in the closed world of professional basketball, predictions about the path of Internet innovation are an order of magnitude harder — especially for those making crucial decisions with a lot of money at stake.

The FCC’s answer for what it considered to be the dangerous unpredictability of Internet innovation was to write itself a blank check of authority to regulate ISPs in the 2015 Open Internet Order (OIO), embodied in what is referred to as the “Internet conduct standard.” This standard expanded the scope of Internet access regulation well beyond the core principle of preserving openness (i.e., ensuring that any legal content can be accessed by all users) by granting the FCC the unbounded, discretionary authority to define and address “new and novel threats to the Internet.”

When asked what the standard meant (not long after writing it), former Chairman Tom Wheeler replied,

We don’t really know. We don’t know where things will go next. We have created a playing field where there are known rules, and the FCC will sit there as a referee and will throw the flag.

Somehow, former Chairman Wheeler would have us believe that an amorphous standard that means whatever the agency (or its Enforcement Bureau) says it means created a playing field with “known rules.” But claiming such broad authority is hardly the light-touch approach marketed to the public. Instead, this ill-conceived standard allows the FCC to wade as deeply as it chooses into how an ISP organizes its business and how it manages its network traffic.

Such an approach is destined to undermine, rather than further, the objectives of Internet openness, as embodied in Chairman Powell’s 2005 Internet Policy Statement:

To foster creation, adoption and use of Internet broadband content, applications, services and attachments, and to ensure consumers benefit from the innovation that comes from competition.

Instead, the Internet conduct standard is emblematic of how an off-the-rails quest to heavily regulate one specific component of the complex Internet ecosystem results in arbitrary regulatory imbalances — e.g., between ISPs and over-the-top (OTT) or edge providers that offer similar services such as video streaming or voice calling.

As Boston College law professor Dan Lyons puts it:

While many might assume that, in theory, what’s good for Netflix is good for consumers, the reality is more complex. To protect innovation at the edge of the Internet ecosystem, the Commission’s sweeping rules reduce the opportunity for consumer-friendly innovation elsewhere, namely by facilities-based broadband providers.

This is no recipe for innovation, nor does it coherently distinguish between practices that might impede competition and innovation on the Internet and those that are merely politically disfavored, for any reason or no reason at all.

Free data madness

The Internet conduct standard’s unholy combination of unfettered discretion and the impulse to micromanage can (and will) be deployed without credible justification to the detriment of consumers and innovation. Nowhere has this been more evident than in the confusion surrounding the regulation of “free data.”

Free data, like T-Mobile’s Binge On program, is data consumed by a user that has been subsidized by a mobile operator or a content provider. The vertical arrangements between operators and content providers creating the free data offerings provide many benefits to consumers, including enabling subscribers to consume more data (or, for low-income users, to consume data in the first place), facilitating product differentiation by mobile operators that offer a variety of free data plans (including allowing smaller operators the chance to get a leg up on competitors by assembling a market-share-winning plan), increasing the overall consumption of content, and reducing users’ cost of obtaining information. It’s also fundamentally about experimentation. As the International Center for Law & Economics (ICLE) recently explained:

Offering some services at subsidized or zero prices frees up resources (and, where applicable, data under a user’s data cap) enabling users to experiment with new, less-familiar alternatives. Where a user might not find it worthwhile to spend his marginal dollar on an unfamiliar or less-preferred service, differentiated pricing loosens the user’s budget constraint, and may make him more, not less, likely to use alternative services.
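The budget-constraint point in the passage above can be made concrete with a stylized example. (The numbers, cap, and service names here are hypothetical, chosen purely for illustration; this is a sketch of the economic logic, not a model of any actual plan.)

```python
# Stylized illustration of how zero-rating loosens a user's data-cap
# constraint, per the ICLE passage above. All numbers are hypothetical.

CAP_GB = 5.0  # monthly metered data cap

usage = {"familiar_video": 4.0, "new_service_trial": 1.5}  # desired GB

# Without zero-rating, every service draws down the cap.
total_metered = sum(usage.values())
fits_without = total_metered <= CAP_GB  # 5.5 GB > 5.0 GB: the trial gets cut

# With the familiar service zero-rated, only the remainder is metered,
# leaving headroom to experiment with the unfamiliar service.
zero_rated = {"familiar_video"}
metered = sum(gb for app, gb in usage.items() if app not in zero_rated)
fits_with = metered <= CAP_GB  # 1.5 GB <= 5.0 GB

print(fits_without, fits_with)  # False True
```

The same consumption pattern that violates the budget constraint under uniform metering fits comfortably once the familiar service is subsidized — which is the sense in which differentiated pricing can make users more, not less, likely to try alternatives.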

In December 2015 then-Chairman Tom Wheeler used his newfound discretion to launch a 13-month “inquiry” into free data practices before preliminarily finding some to be in violation of the standard. Without identifying any actual harm, Wheeler concluded that free data plans “may raise” economic and public policy issues that “may harm consumers and competition.”

After assuming the reins at the FCC, Chairman Pai swiftly put an end to that nonsense, saying that the Commission had better things to do (like removing barriers to broadband deployment) than blocking free data plans that expand Internet access and are immensely popular, especially among low-income Americans.

The global morass of free data regulation

But as long as the Internet conduct standard remains on the books, it implicitly grants the US’s imprimatur to harmful policies and regulatory capriciousness in other countries that look to the US for persuasive authority. While Chairman Pai’s decisive intervention resolved the free data debate in the US (at least for now), other countries are still grappling with whether to prohibit the practice, allow it, or allow it with various restrictions.

In Europe, the 2016 EC guidelines left the decision of whether to allow the practice in the hands of national regulators. Consequently, some regulators — in Hungary, Sweden, and the Netherlands (although there the ban was recently overturned in court) — have banned free data practices, while others — in Denmark, Germany, Spain, Poland, the United Kingdom, and Ukraine — have not. And whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs, a state of affairs that is compounded by a lack of data on the consequences of various approaches to their regulation.

In Canada this year, the CRTC issued a decision adopting restrictive criteria under which to evaluate free data plans. The criteria include assessing the degree to which the treatment of data is agnostic, whether the free data offer is exclusive to certain customers or certain content providers, the impact on Internet openness and innovation, and whether there is financial compensation involved. The standard is open-ended, and free data plans as they are offered in the US would “likely raise concerns.”

Other regulators are contributing to the confusion through ambiguously framed rules, such as those of the Chilean regulator, Subtel. In a 2014 decision, it found that a free data offer of specific social network apps was in breach of Chile’s Internet rules. In contrast to what is commonly reported, however, Subtel did not ban free data. Instead, it required mobile operators to change how they promote such services, requiring them to state that access to Facebook, Twitter, and WhatsApp was offered “without discounting the user’s balance” instead of “at no cost.” It also required them to disclose the amount of time the offer would be available, but imposed no mandatory limit.

In addition to this confusing regulatory make-work governing how operators market free data plans, the Chilean measures also require that mobile operators offer free data to subscribers who pay for a data plan, in order to ensure free data isn’t the only option users have to access the Internet.

The result is that in Chile today free data plans are widely offered by Movistar, Claro, and Entel, and include access to apps such as Facebook, WhatsApp, Twitter, Instagram, Pokemon Go, Waze, Snapchat, Apple Music, Spotify, Netflix, and YouTube — even though Subtel has nominally declared such plans to be in violation of Chile’s net neutrality rules.

Other regulators are searching for palatable alternatives that allow them both to flex their regulatory muscle over Internet access and to make free data work. The Indian regulator, TRAI, famously banned free data in February 2016. But the story doesn’t end there. After seeing the potential value of free data in unserved and underserved, low-income areas, TRAI proposed implementing government-sanctioned free data. The proposed scheme would provide rural subscribers with 100 MB of free data per month, funded through the country’s universal service fund. To ensure that there would be no vertical agreements between content providers and mobile operators, TRAI recommended introducing third parties, referred to as “aggregators,” that would facilitate mobile-operator-agnostic arrangements.

The result is a nonsensical, if vaguely well-intentioned, threading of the needle between the perceived need to (over-)regulate access providers and the determination to expand access. Notwithstanding the Indian government’s awareness that free data will help to close the digital divide and enhance Internet access, in other words, it nonetheless banned private markets from employing private capital to achieve that very result, preferring instead non-market processes which are unlikely to be nearly as nimble or as effective — and yet still ultimately offer “non-neutral” options for consumers.

Thinking globally, acting locally (by ditching the Internet conduct standard)

Where it is permitted, free data is undergoing explosive adoption among mobile operators. Currently in the US, for example, all major mobile operators offer some form of free data or unlimited plan to subscribers. And, as a result, free data is proving itself as a business model for users’ early-stage experimentation with, and adoption of, augmented reality, virtual reality, and other cutting-edge technologies that represent the Internet’s next wave — but that also use vast amounts of data. Were the US to cut the legs out from under free data via the OIO absent hard evidence of harm, it would substantially undermine this innovation.

The application of the nebulous Internet conduct standard to free data is a microcosm of the current incoherence: it is a rule rife with uncertainties and aimed at merely theoretical problems, needlessly saddling companies with enforcement risk, all in the name of preserving and promoting innovation and openness. As even some of the staunchest proponents of net neutrality have recognized, only companies that can afford years of litigation can be expected to thrive in such an environment.

In the face of confusion and uncertainty globally, the US is now poised to provide leadership grounded in sound policy that promotes innovation. As ICLE noted last month, Chairman Pai took a crucial step toward re-imposing economic rigor and the rule of law at the FCC by questioning the unprecedented and ill-supported expansion of FCC authority that undergirds the OIO in general and the Internet conduct standard in particular. Today the agency will take the next step by voting on Chairman Pai’s proposed rulemaking. Wherever the new proceeding leads, it’s a welcome opportunity to analyze the issues with a degree of rigor that has thus far been appallingly absent.

And we should not forget that there’s a direct solution to these ambiguities that would avoid the undulations of subsequent FCC policy fights: Congress could (and should) pass legislation implementing a regulatory framework grounded in sound economics and empirical evidence that allows consumers to benefit from the vast number of procompetitive vertical agreements (such as free data plans), while still facilitating a means for policing conduct that may actually harm consumers.

The Golden State Warriors are the heavy odds-on favorite to win another NBA Championship this summer, led by former OKC player Kevin Durant. And James Harden is a contender for league MVP. We can’t always turn back the clock on a terrible decision, hastily made before enough evidence has been gathered, but Chairman Pai’s efforts present a rare opportunity to do so.

Today the International Center for Law & Economics (ICLE) Antitrust and Consumer Protection Research Program released a new white paper by Geoffrey A. Manne and Allen Gibby entitled:

A Brief Assessment of the Procompetitive Effects of Organizational Restructuring in the Ag-Biotech Industry

Over the past two decades, rapid technological innovation has transformed the industrial organization of the ag-biotech industry. These developments have contributed to an impressive increase in crop yields, a dramatic reduction in chemical pesticide use, and a substantial increase in farm profitability.

One of the most striking characteristics of this organizational shift has been a steady increase in consolidation. The recent announcements of mergers between Dow and DuPont, ChemChina and Syngenta, and Bayer and Monsanto suggest that these trends are continuing in response to new market conditions and a marked uptick in scientific and technological advances.

Regulators and industry watchers are often concerned that increased consolidation will lead to reduced innovation, and a greater incentive and ability for the largest firms to foreclose competition and raise prices. But ICLE’s examination of the underlying competitive dynamics in the ag-biotech industry suggests that such concerns are likely unfounded.

In fact, R&D spending within the seeds and traits industry increased nearly 773% between 1995 and 2015 (from roughly $507 million to $4.4 billion), while the combined market share of the six largest companies in the segment increased by more than 550% (from about 10% to over 65%) during the same period.
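As a quick sanity check, both growth rates follow directly from the rounded figures quoted above (the small gap between the computed R&D figure and the cited ~773% reflects rounding of the underlying dollar values):

```python
# Back-of-the-envelope check of the growth figures quoted above,
# using the rounded values from the text.

rd_1995, rd_2015 = 507e6, 4.4e9  # seeds-and-traits R&D spending ($)
rd_growth = (rd_2015 - rd_1995) / rd_1995 * 100
# ~768% with these rounded inputs, close to the ~773% cited

share_1995, share_2015 = 10.0, 65.0  # combined top-6 market share (%)
share_growth = (share_2015 - share_1995) / share_1995 * 100  # 550%

print(round(rd_growth), round(share_growth))  # 768 550
```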

Firms today are consolidating in order to innovate and remain competitive in an industry replete with new entrants and rapidly evolving technological and scientific developments.

According to ICLE’s analysis, critics have unduly focused on the potential harms from increased integration, without properly accounting for the potential procompetitive effects. Our brief white paper highlights these benefits and suggests that a more nuanced and restrained approach to enforcement is warranted.

Our analysis suggests that, as in past periods of consolidation, the industry is well positioned to see an increase in innovation as these new firms unite complementary expertise to pursue more efficient and effective research and development. They should also be better able to help finance, integrate, and coordinate development of the latest scientific and technological developments — particularly in rapidly growing, data-driven “digital farming” — throughout the industry.

Download the paper here.

And for more on the topic, revisit TOTM’s recent blog symposium, “Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries,” here.

On Thursday, March 30, Friday, March 31, and Monday, April 3, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries — discussing three proposed agricultural/biotech industry mergers awaiting judgment by antitrust authorities around the globe. These proposed mergers — Bayer/Monsanto, Dow/DuPont and ChemChina/Syngenta — present a host of fascinating issues, many of which go to the core of merger enforcement in innovative industries — and antitrust law and economics more broadly.

The big issue for the symposium participants was innovation (as it was for the European Commission, which cleared the Dow/DuPont merger last week, subject to conditions, one of which related to the firms’ R&D activities).

Critics of the mergers, as currently proposed, asserted that the increased concentration arising from the “Big 6” Ag-biotech firms consolidating into the Big 4 could reduce innovation competition by (1) eliminating parallel paths of research and development (Moss); (2) creating highly integrated technology/traits/seeds/chemicals platforms that erect barriers to entry for new platforms (Moss); (3) exploiting eventual network effects that may result from the shift towards data-driven agriculture to block new entry in input markets (Lianos); or (4) increasing incentives to refuse to license, impose discriminatory restrictions in technology licensing agreements, or tacitly “agree” not to compete (Moss).

Rather than fixating on horizontal market share, proponents of the mergers argued that innovative industries are often marked by disruptions and that investment in innovation is an important signal of competition (Manne). An evaluation of the overall level of innovation should include not only the additional economies of scale and scope of the merged firms, but also advancements made by more nimble, less risk-averse biotech companies and smaller firms, whose innovations the larger firms can incentivize through licensing or M&A (Shepherd). In fact, increased efficiency created by economies of scale and scope can make funds available to source innovation outside of the large firms (Shepherd).

In addition, innovation analysis must also account for the intricately interwoven nature of agricultural technology across seeds and traits, crop protection, and, now, digital farming (Sykuta). Combined product portfolios generate more data to analyze, resulting in increased data-driven value for farmers and more efficiently targeted R&D resources (Sykuta).

While critics voiced concerns over such platforms erecting barriers to entry, markets are contestable to the extent that incumbents are incentivized to compete (Russell). It is worth noting that certain industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (including automobiles, wireless service, and cable networks) have seen their prices decrease substantially relative to inflation over the last 20 years — even as concentration has increased (Russell). Not coincidentally, product innovation in these industries, as in ag-biotech, has been high.

Ultimately, assessing the likely effects of each merger using static measures of market structure is arguably unreliable or irrelevant in dynamic markets with high levels of innovation (Manne).

Regarding patents, critics were skeptical that combining the merging companies’ patent portfolios would offer benefits beyond those achievable through cross-licensing, and they argued that the combinations would serve to raise rivals’ costs (Ghosh). While this may be true in some cases, IP rights are probabilistic, especially in dynamic markets, as Nicolas Petit noted:

There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will resist to invalidity proceedings in court; (iii) little safety to competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change.

In spite of these uncertainties, deals such as the pending ag-biotech mergers provide managers the opportunity to evaluate and reorganize assets to maximize innovation and return on investment in such a way that would not be possible absent a merger (Sykuta). Neither party would fully place its IP and innovation pipeline on the table otherwise.

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!

In a recent long-form article in the New York Times, reporter Noam Scheiber set out to detail some of the ways Uber (and similar companies, but mainly Uber) is engaged in “an extraordinary experiment in behavioral science to subtly entice an independent work force to maximize its growth.”

That characterization seems innocuous enough, but it is apparent early on that Scheiber’s aim is not only to inform but also, if not primarily, to deride these efforts. The title of the piece, in fact, sets the tone:

How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons

Uber and its relationship with its drivers are variously described by Scheiber in the piece as secretive, coercive, manipulative, dominating, and exploitative, among other things. As Scheiber describes his article, it sets out to reveal how

even as Uber talks up its determination to treat drivers more humanely, it is engaged in an extraordinary behind-the-scenes experiment in behavioral science to manipulate them in the service of its corporate growth — an effort whose dimensions became evident in interviews with several dozen current and former Uber officials, drivers and social scientists, as well as a review of behavioral research.

What’s so galling about the piece is that, if you strip away the biased and frequently misguided framing, it presents a truly engaging picture of some of the ways that Uber sets about solving a massively complex optimization problem, one beset by significant agency costs.

So I did. Strip away the detritus, add essential (but omitted) context, and edit the article to fix the anti-Uber bias, the one-sided presentation, the mischaracterizations, and the fundamentally non-economic presentation of what is, at its core, a fascinating illustration of some basic problems (and solutions) from industrial organization economics. (For what it’s worth, Scheiber should know better. After all, “He holds a master’s degree in economics from the University of Oxford, where he was a Rhodes Scholar, and undergraduate degrees in math and economics from Tulane University.”)

In my retelling, the title becomes:

How Uber Uses Innovative Management Tactics to Incentivize Its Drivers

My transformed version of the piece, with critical commentary in the form of tracked changes to the original, is here (pdf).

It’s a long (and, as I said, fundamentally interesting) piece, with cool interactive graphics, well worth the read (well, at least in my retelling, IMHO). Below is just a taste of the edits and commentary I added.

For example, where Scheiber writes:

Uber exists in a kind of legal and ethical purgatory, however. Because its drivers are independent contractors, they lack most of the protections associated with employment. By mastering their workers’ mental circuitry, Uber and the like may be taking the economy back toward a pre-New Deal era when businesses had enormous power over workers and few checks on their ability to exploit it.

With my commentary (here integrated into final form rather than tracked), that paragraph becomes:

Uber operates under a different set of legal constraints, however, also duly enacted and under which millions of workers have profitably worked for decades. Because its drivers are independent contractors, they receive their compensation largely in dollars rather than government-mandated “benefits” that remove some of the voluntariness from employer/worker relationships. And under a mandate like overtime pay, for example, the Uber business model, which is built in part on offering flexible incentives to match supply and demand using prices and compensation, would be next to impossible. It is precisely through appealing to drivers’ self-interest that Uber and the like may be moving the economy forward to a new era when businesses and workers have more flexibility, much to the benefit of all.

Elsewhere, Scheiber’s bias is a bit more subtle, but no less real. Thus, he writes:

As he tried to log off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted.

With my edits and commentary, that paragraph becomes:

As he started the process of logging off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted, but the former was listed first. It’s anyone’s guess whether either characteristic — placement or coloring — had any effect on drivers’ likelihood of clicking one button or the other.

And one last example. Scheiber writes:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, there is another way to think of the logic of forward dispatch: It overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

This pre-emptive hard-wiring can have a huge influence on behavior, said David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably, as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be.

Here’s how I would recast that, and add some much-needed economics:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies — by giving them more income-earning opportunities.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, and seems like another win-win, some critics have tried to paint even this means of satisfying both driver and consumer preferences in a negative light by claiming that the forward dispatch algorithm overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

Tweaks like these put paid to the arguments that Uber is simply trying to abuse its drivers. And yet, critics continue to make such claims:

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

It’s difficult to take seriously claims that Uber “abuses” drivers by setting a default that drivers almost certainly prefer; surely drivers seek out another fare following the last fare more often than they seek out another bathroom break. In any case, the difference between one default and the other is a small change in the number of times drivers might have to push a single button; hardly a huge impediment.

But such claims persist, nevertheless. Setting a trivially different default can have a huge influence on behavior, claims David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably — and to change the subject — as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be. But there are any number of defenses of this practice, from both a driver- and consumer-welfare standpoint. Not least, such disclosure could well create isolated scarcity for a huge range of individual ride requests (as opposed to the general scarcity during a “surge”), leading to longer wait times, the need to adjust prices for consumers on the basis of individual rides, and more intense competition among drivers for the most profitable rides. Given these and other explanations, it is extremely unlikely that the practice is actually aimed at “abusing” drivers.
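For readers who want to see the mechanics, the forward-dispatch idea discussed above is easy to sketch in code. What follows is a purely hypothetical toy model, assuming a single-slot look-ahead queue and a driver-controlled pause flag; the class design and names are my own illustration, not Uber’s or Lyft’s actual implementation:

```python
from collections import deque

class Driver:
    """Toy model of forward dispatch: a driver can hold one queued
    ride while finishing the current one, shortening rider waits
    and driver idle time. Purely illustrative."""

    def __init__(self):
        self.current = None
        self.queued = deque(maxlen=1)  # forward dispatch: one ride ahead
        self.accepting = True          # the "pause" button Uber added

    def offer(self, ride):
        """Try to assign a ride; return True if it was accepted."""
        if not self.accepting:
            return False               # driver has paused new requests
        if self.current is None:
            self.current = ride        # idle driver starts immediately
            return True
        if not self.queued:
            self.queued.append(ride)   # dispatched before current ride ends
            return True
        return False                   # already holding one ride ahead

    def finish_current(self):
        """Complete the current ride; any queued ride starts at once."""
        done, self.current = self.current, None
        if self.queued:
            self.current = self.queued.popleft()  # no idle gap
        return done

d = Driver()
d.offer("ride A")
d.offer("ride B")        # queued while ride A is still under way
d.finish_current()       # ride A ends; ride B begins with no idle time
print(d.current)         # ride B
```

The one-button pause maps to flipping `accepting` to `False`, which is why the dispute over defaults amounts to how often that flag resets, rather than to whether drivers can stop at all.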

As they say, read the whole thing!