
I urge Truth on the Market readers to signal their preferences and help select the 2016 antitrust writing awards bestowed by the prestigious competition law and policy journal, Concurrences.  (See here for the 2015 winners.)

Readers and a Steering Committee vote for their favorite articles among those nominated, which results in a short list of finalists (two per category).  The Concurrences Board then votes for the award-winning articles from the shortlist.  (See here for detailed rules.)

Readers can now vote online until February 15 for their favorite articles at http://awards.concurrences.com/.

Among the nominees are three excellent papers written by former FTC Commissioner Joshua D. Wright (including one written with Judge Douglas H. Ginsburg) and one paper co-authored by Professor Thom Lambert and me (the four articles fall into three separate categories so you can vote for at least three of them):

  1. Academic Article IP Category: Douglas H. Ginsburg, Koren W. Wong-Ervin, and Joshua D. Wright, Product Hopping and the Limits of Antitrust: The Danger of Micromanaging Innovation, http://awards.concurrences.com/articles-awards/academic-articles-awards/article/product-hopping-and-the-limits-of-antitrust-the-danger-of-micromanaging.
  2. Academic Article General Antitrust Category: Joshua D. Wright & Angela Diveley, Unfair Methods of Competition after the 2015 Commission Statement, http://awards.concurrences.com/articles-awards/academic-articles-awards/article/unfair-methods-of-competition-after-the-2015-commission-statement.
  3. Academic Article Unilateral Conduct Category: Derek Moore & Joshua D. Wright, Conditional Discounts and the Law of Exclusive Dealing, http://awards.concurrences.com/articles-awards/academic-articles-awards/article/conditional-discounts-and-the-law-of-exclusive-dealing.
  4. Academic Article General Antitrust Category: Thomas A. Lambert and Alden F. Abbott, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies, http://jcle.oxfordjournals.org/content/early/2015/09/14/joclec.nhv020.abstract and http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2596660 (downloadable version).

All four of these articles break new ground in important areas of antitrust law and policy.

(Full disclosure: Wright and Ginsburg are professors at George Mason Law School. I am on the adjunct faculty at that fine institution, and Wong-Ervin is Director of George Mason Law School’s Global Antitrust Institute.)

A number of blockbuster mergers have received (often negative) attention from media and competition authorities in recent months. From the recently challenged Staples-Office Depot merger to the abandoned Comcast-Time Warner merger to the heavily scrutinized Aetna-Humana merger (among many others), there has been a wave of potential mega-mergers throughout the economy—many of them met with regulatory resistance. We’ve discussed several of these mergers at TOTM (see, e.g., here, here, here and here).

Many reporters, analysts, and even competition authorities have adopted various degrees of the usual stance that big is bad, and bigger is even badder. But worse yet, once this presumption applies, agencies have been skeptical of claimed efficiencies, placing a heightened burden on the merging parties to prove them and often ignoring them altogether. And, of course (and perhaps even worse still), there is the perennial problem of (often questionable) market definition — which tanked the Sysco/US Foods merger and which undergirds the FTC’s challenge of the Staples/Office Depot merger.

All of these issues are at play in the proposed acquisition of British aluminum can manufacturer Rexam PLC by American can manufacturer Ball Corp., which has likewise drawn the attention of competition authorities around the world — including those in Brazil, the European Union, and the United States.

But the Ball/Rexam merger has met with some important regulatory successes. Just recently the members of CADE, Brazil’s competition authority, unanimously approved the merger with limited divestitures. The most recent reports also indicate that the EU will likely approve it, as well. It’s now largely down to the FTC, which should approve the merger and not kill it or over-burden it with required divestitures on the basis of questionable antitrust economics.

The proposed merger raises a number of interesting issues in the surprisingly complex beverage container market. But this merger merits regulatory approval.

The International Center for Law & Economics recently released a research paper entitled, The Ball-Rexam Merger: The Case for a Competitive Can Market. The white paper offers an in-depth assessment of the economics of the beverage packaging industry; the place of the Ball-Rexam merger within this remarkably complex, global market; and the likely competitive effects of the deal.

The upshot is that the proposed merger is unlikely to have anticompetitive effects, and any competitive concerns that do arise can be readily addressed by a few targeted divestitures.

The bottom line

The production and distribution of aluminum cans is a surprisingly dynamic industry, characterized by evolving technology, shifting demand, complex bargaining dynamics, and significant changes in the costs of production and distribution. Despite the superficial appearance that the proposed merger will increase concentration in aluminum can manufacturing, we conclude that a proper understanding of the marketplace dynamics suggests that the merger is unlikely to have actual anticompetitive effects.

All told, and as we summarize in our Executive Summary, we found at least seven specific reasons for this conclusion:

  1. Because the appropriately defined product market includes not only stand-alone can manufacturers, but also vertically integrated beverage companies, as well as plastic and glass packaging manufacturers, the actual increase in concentration from the merger will be substantially less than suggested by the change in the number of nationwide aluminum can manufacturers.
  2. Moreover, in nearly all of the relevant geographic markets (which are much smaller than the typically nationwide markets from which concentration numbers are derived), the merger will not affect market concentration at all.
  3. While beverage packaging isn’t a typical, rapidly evolving, high-technology market, technological change is occurring. Coupled with shifting consumer demand (often driven by powerful beverage company marketing efforts), and considerable (and increasing) buyer power, historical beverage packaging market shares may have little predictive value going forward.
  4. The key importance of transportation costs and the effects of current input prices suggest that expanding demand can be effectively met only by expanding the geographic scope of production and by economizing on aluminum supply costs. These, in turn, suggest that increasing overall market concentration is consistent with increased, rather than decreased, competitiveness.
  5. The markets in which Ball and Rexam operate are dominated by a few large customers, who are themselves direct competitors in the upstream marketplace. These companies have shown a remarkable willingness and ability to invest in competing packaging supply capacity and to exert their substantial buyer power to discipline prices.
  6. For this same reason, complaints leveled against the proposed merger by these beverage giants — which are as much competitors as they are customers of the merging companies — should be viewed with skepticism.
  7. Finally, the merger should generate significant managerial and overhead efficiencies, and the merged firm’s expanded geographic footprint should allow it to service larger geographic areas for its multinational customers, thus lowering transaction costs and increasing its value to these customers.
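The first two points turn on how much the merger actually moves standard concentration measures. A minimal sketch, using purely hypothetical market shares (none of these figures come from the white paper), shows how a broader product-market definition mechanically shrinks the merger's HHI delta:

```python
# Hypothetical, illustrative shares only -- not actual market data.
# HHI = sum of squared market shares (in percentage points);
# a merger's delta is 2 * s_A * s_B for merging firms A and B.

def hhi(shares):
    """Herfindahl-Hirschman Index for a list of percentage shares."""
    return sum(s * s for s in shares)

# Narrow market: only stand-alone can makers (hypothetical: A=40, B=30 merge).
narrow_pre  = [40, 30, 20, 10]
narrow_post = [70, 20, 10]
delta_narrow = hhi(narrow_post) - hhi(narrow_pre)  # 2 * 40 * 30 = 2400

# Broader market: adding vertically integrated beverage companies and
# glass/plastic packagers dilutes every share (hypothetical: A=20, B=15).
broad_pre  = [20, 15, 15, 15, 10, 10, 10, 5]
broad_post = [35, 15, 15, 10, 10, 10, 5]
delta_broad = hhi(broad_post) - hhi(broad_pre)  # 2 * 20 * 15 = 600

print(delta_narrow, delta_broad)
```

Because the merger's contribution to the HHI is twice the product of the merging firms' shares, halving each firm's share (by widening the market) cuts the delta by a factor of four.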

Distinguishing Ardagh: The interchangeability of aluminum and glass

An important potential sticking point for the FTC’s review of the merger is its recent decision to challenge the Ardagh-Saint Gobain merger. The cases are superficially similar, in that they both involve beverage packaging. But Ardagh should not stand as a model for the Commission’s treatment of Ball/Rexam. The FTC made a number of mistakes in Ardagh (including market definition and the treatment of efficiencies — the latter of which brought out a strenuous dissent from Commissioner Wright). But even on its own (questionable) terms, Ardagh shouldn’t mean trouble for Ball/Rexam.

As we noted in our December 1st letter to the FTC on the Ball/Rexam merger, and as we discuss in detail in the paper, the situation in the aluminum can market is quite different from the (alleged) market for “(1) the manufacture and sale of glass containers to Brewers; and (2) the manufacture and sale of glass containers to Distillers” at issue in Ardagh.

Importantly, the FTC found (almost certainly incorrectly, at least for the brewers) that other container types (e.g., plastic bottles and aluminum cans) were not part of the relevant product market in Ardagh. But in the markets in which aluminum cans are a primary form of packaging (most notably, soda and beer), our research indicates that glass, plastic, and aluminum are most definitely substitutes.

The Big Four beverage companies (Coca-Cola, PepsiCo, Anheuser-Busch InBev, and MillerCoors), which collectively make up 80% of the U.S. market for Ball and Rexam, are all vertically integrated to some degree, and provide much of their own supply of containers (a situation significantly different than the distillers in Ardagh). These companies exert powerful price discipline on the aluminum packaging market by, among other things, increasing (or threatening to increase) their own container manufacturing capacity, sponsoring new entry, and shifting production (and, via marketing, consumer demand) to competing packaging types.

For soda, Ardagh is obviously inapposite, as soda packaging wasn’t at issue there. But the FTC’s conclusion in Ardagh that aluminum cans (which in fact make up 56% of the beer packaging market) don’t compete with glass bottles for beer packaging is also suspect.

For aluminum can manufacturers Ball and Rexam, aluminum can’t be excluded from the market (obviously), and much of the beer in the U.S. that is packaged in aluminum is quite clearly also packaged in glass. The FTC claimed in Ardagh that glass and aluminum are consumed in distinct situations, so they don’t exert price pressure on each other. But that ignores the considerable ability of beer manufacturers to influence consumption choices, as well as the reality that consumer preferences for each type of container (whether driven by beer company marketing efforts or not) are merging, with cost considerations dominating other factors.

In fact, consumers consume beer in both packaging types largely interchangeably (with a few limited exceptions — e.g., poolside drinking demands aluminum or plastic), and beer manufacturers readily switch between the two types of packaging as the relative production costs shift.

Craft brewers, to take one important example, are rapidly switching to aluminum from glass, despite a supposed stigma surrounding canned beers. Some craft brewers (particularly the larger ones) package at least some of their beers in both types of containers, but for many it’s one or the other. Yet there’s no indication that craft beer consumption has fallen off because consumers won’t drink beer from cans in some situations — and obviously the prospect of this outcome hasn’t stopped craft brewers from abandoning bottles entirely in favor of more economical cans, nor has it induced them, as a general rule, to offer both types of packaging.

A very short time ago it might have seemed that aluminum wasn’t in the same market as glass for craft beer packaging. But, as recent trends have borne out, that differentiation wasn’t primarily a function of consumer preference (either at the brewer or end-consumer level). Rather, it was a function of bottling/canning costs (until recently the machinery required for canning was prohibitively expensive), materials costs (at various times glass has been cheaper than aluminum, depending on volume), and transportation costs (which cut against glass and which, because they vary over time, shift the relative attractiveness of the different packaging materials). To be sure, consumer preference isn’t irrelevant, but the ease with which brewers have shifted consumer preferences suggests that it isn’t a strong constraint.

Transportation costs are key

Transportation costs, in fact, are a key part of the story — and of the conclusion that the Ball/Rexam merger is unlikely to have anticompetitive effects. First of all, transporting empty cans (or bottles, for that matter) is tremendously inefficient — which means that the relevant geographic markets for assessing the competitive effects of the Ball/Rexam merger are essentially the largely non-overlapping 200-mile circles around the companies’ manufacturing facilities. Because there are very few markets in which the two companies both have plants, the merger doesn’t change the extent of competition in the vast majority of relevant geographic markets.
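The geographic-market point can be made concrete. In the rough sketch below (the coordinates and the 200-mile radius are illustrative assumptions, not figures from the paper), two plants are even candidates for the same geographic market only if their delivery circles intersect, i.e., only if the plants themselves sit within twice the delivery radius of each other:

```python
import math

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles (haversine formula)."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def markets_overlap(plant_a, plant_b, radius=200):
    """Two plants' delivery circles intersect iff the plants are
    within 2 * radius miles of each other."""
    return miles_between(*plant_a, *plant_b) < 2 * radius

# Hypothetical plant locations (roughly Denver and Seattle):
denver, seattle = (39.74, -104.99), (47.61, -122.33)
print(markets_overlap(denver, seattle))  # ~1020 miles apart -> False
```

With plants spread across a continent, most such pairwise checks fail, which is the intuition behind the claim that the merger leaves concentration unchanged in most relevant geographic markets.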

But transportation costs are also relevant to the interchangeability of packaging materials. Glass is more expensive to transport than aluminum, and this is true not just for empty bottles, but for full ones, of course. So, among other things, by switching to cans (even if it entails up-front cost), smaller breweries can expand their geographic reach, potentially expanding sales enough to more than cover switching costs. The merger would further lower the costs of cans (and thus of geographic expansion) by enabling beverage companies to transact with a single company across a wider geographic range.

The reality is that the most important factor in packaging choice is cost, and that the packaging alternatives are functionally interchangeable. As a result, and given that the direct consumers of beverage packaging are beverage companies rather than end-consumers, relatively small cost changes readily spur changes in packaging choices. While there are some switching costs that might impede these shifts, they are readily overcome. For large beverage companies that already use multiple types and sizes of packaging for the same product, the costs are trivial: They already have packaging designs, marketing materials, distribution facilities and the like in place. For smaller companies, a shift can be more difficult, but innovations in labeling, mobile canning/bottling facilities, outsourced distribution and the like significantly reduce these costs.  

“There’s a great future in plastics”

All of this is even more true for plastic — even in the beer market. In fact, in 2010, 10% of the beer consumed in Europe was sold in plastic bottles, as was 15% of all beer consumed in South Korea. We weren’t able to find reliable numbers for the U.S., but particularly for cheaper beers, U.S. brewers are increasingly moving to plastic. And plastic bottles are the norm at stadiums and arenas. Whatever the exact numbers, clearly plastic holds a small fraction of the beer container market compared to glass and aluminum. But that number is just as clearly growing, and as cost considerations impel them (and technology enables them), giant, powerful brewers like AB InBev and MillerCoors are certainly willing and able to push consumers toward plastic.

Meanwhile, soda companies like Coca-Cola and Pepsi have successfully moved their markets so that today a majority of packaged soda is sold in plastic containers. There’s no evidence that this shift came about as a result of end-consumer demand, nor that it was delayed by a lack of demand elasticity; rather, it was primarily a function of these companies’ ability to realize bigger profits on sales in plastic containers (not least because they own their own plastic packaging production facilities).

And while it’s not at issue in Ball/Rexam because spirits are rarely sold in aluminum packaging, the FTC’s conclusion in Ardagh that

[n]on-glass packaging materials, such as plastic containers, are not in this relevant product market because not enough spirits customers would switch to non-glass packaging materials to make a SSNIP in glass containers to spirits customers unprofitable for a hypothetical monopolist

is highly suspect — which suggests the Commission may have gotten it wrong in other ways, too. For example, as one report notes:

But the most noteworthy inroads against glass have been made in distilled liquor. In terms of total units, plastic containers, almost all of them polyethylene terephthalate (PET), have surpassed glass and now hold a 56% share, which is projected to rise to 69% by 2017.

True, most of this must be tiny-volume airplane bottles, but by no means all of it is, and it’s clear that the cost advantages of plastic are driving a shift in distilled liquor packaging, as well. Some high-end brands are even moving to plastic. Whatever resistance (and this is true for beer, too) may have existed in the past because of glass’s “image” is breaking down: Don’t forget that even high-quality wines are now often sold with screw-tops or even in boxes — something that was once thought impossible.
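The SSNIP logic the FTC invoked can be made concrete with a standard critical-loss calculation. Under one common formulation, a price increase of s is unprofitable for a hypothetical monopolist if the fraction of sales actually lost exceeds s / (s + m), where m is the contribution margin. A minimal sketch with purely hypothetical numbers (neither the SSNIP nor the margin comes from the Ardagh record):

```python
def critical_loss(ssnip, margin):
    """Fraction of unit sales a hypothetical monopolist can lose before a
    price increase of `ssnip` becomes unprofitable: CL = s / (s + m)."""
    return ssnip / (ssnip + margin)

# Hypothetical: a 5% SSNIP against a 30% contribution margin.
cl = critical_loss(0.05, 0.30)
print(round(cl, 3))  # ~0.143: losing more than ~14% of sales defeats the SSNIP
```

The point is that the threshold can be quite low: if even a modest share of spirits (or beer) buyers would shift to plastic or aluminum in response to a 5% glass price increase, the glass-only market definition is too narrow.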

The overall point is that the beverage packaging market faced by can makers like Ball and Rexam is remarkably complex, and, crucially, the presence of powerful, vertically integrated customers means that past or current demand by end-users is a poor indicator of what the market will look like in the future as input costs and other considerations faced by these companies shift. Right now, for example, over 50% of the world’s soda is packaged in plastic bottles, and this margin is set to increase: The global plastic packaging market (not limited to just beverages) is expected to grow at a CAGR of 5.2% between 2014 and 2020, while aluminum packaging is expected to grow at just 2.9%.
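For a sense of scale, compounding the two CAGRs quoted above over the six years from 2014 to 2020 shows how quickly the gap widens:

```python
# Compound the quoted growth rates over six years (2014 -> 2020).
plastic_growth  = (1 + 0.052) ** 6  # roughly 1.36x
aluminum_growth = (1 + 0.029) ** 6  # roughly 1.19x
print(round(plastic_growth, 2), round(aluminum_growth, 2))
```

That is, on those projections the plastic packaging market grows by about a third over the period while aluminum grows by under a fifth.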

A note on efficiencies

As noted above, the proposed Ball/Rexam merger also holds out the promise of substantial efficiencies (estimated at $300 million by the merging parties, due mainly to decreased transportation costs). There is a risk, however, that the FTC may effectively disregard those efficiencies, as it did in Ardagh (and in St. Luke’s before it), by saddling them with a higher burden of proof than it requires of its own prima facie claims. If the goal of antitrust law is to promote consumer welfare, competition authorities can’t ignore efficiencies in merger analysis.

In his Ardagh dissent, Commissioner Wright noted that:

Even when the same burden of proof is applied to anticompetitive effects and efficiencies, of course, reasonable minds can and often do differ when identifying and quantifying cognizable efficiencies as appears to have occurred in this case.  My own analysis of cognizable efficiencies in this matter indicates they are significant.   In my view, a critical issue highlighted by this case is whether, when, and to what extent the Commission will credit efficiencies generally, as well as whether the burden faced by the parties in establishing that proffered efficiencies are cognizable under the Merger Guidelines is higher than the burden of proof facing the agencies in establishing anticompetitive effects. After reviewing the record evidence on both anticompetitive effects and efficiencies in this case, my own view is that it would be impossible to come to the conclusions about each set forth in the Complaint and by the Commission — and particularly the conclusion that cognizable efficiencies are nearly zero — without applying asymmetric burdens.

The Commission shouldn’t make the same mistake here. In fact, here, where can manufacturers are squeezed between powerful companies both upstream (e.g., Alcoa) and downstream (e.g., AB InBev), and where transportation costs limit the opportunities for expanding the customer base of any particular plant, the ability to capitalize on economies of scale and geographic scope is essential to independent manufacturers’ abilities to efficiently meet rising demand.

Read our complete assessment of the merger’s effect here.


I have small children and, like any reasonably competent parent, I take an interest in monitoring their Internet usage. In particular, I am sensitive to what ad content they are being served and which sites they visit that might try to misuse their information. My son even uses Chromebooks at his elementary school, which underscores this concern for me, as I can’t always be present to watch what he does online. However, also like any other reasonably competent parent, I trust his school and his teacher to make good choices about what he is allowed to do online when I am not there to watch him. And so it is that I am both interested in and rather perplexed by what has EFF so worked up in its FTC complaint alleging privacy “violations” in the “Google for Education” program.

EFF alleges three “unfair or deceptive” acts that would subject Google to remedies under Section 5 of the FTCA: (1) Students logged into “Google for Education” accounts have their non-educational behavior individually tracked (e.g., performing general web searches, browsing YouTube, etc.); (2) the Chromebooks distributed as part of the “Google for Education” program have the “Chrome Sync” feature turned on by default (ostensibly in a terribly diabolical effort to give students a seamless experience between using the Chromebooks at home and at school); and (3) the school administrators running particular instances of “Google for Education” have the ability to share student geolocation information with third-party websites. Each of these acts, claims EFF, violates the K-12 School Service Provider Pledge to Safeguard Student Privacy (“Pledge”) that was authored by the Future of Privacy Forum and the Software & Information Industry Association, and to which Google is a signatory. According to EFF, Google included references to its signature in its “Google for Education” marketing materials, thereby creating the expectation in parents that it would adhere to the principles, failed to do so, and thus should be punished.

The TL;DR version: EFF appears to be making some simple interpretational errors — it believes that the scope of the Pledge covers any student activity and data generated while a student is logged into a Google account. As the rest of this post will (hopefully) make clear, however, the Pledge, though ambiguous, is more reasonably read as limiting Google’s obligations to instances where a student is using Google for Education apps; it does not apply when the student is using non-Education apps — whether she is logged in with her Education account or not.

The key problem, as EFF sees it, is that Google “use[d] and share[d] … student personal information beyond what is needed for education.” So nice of them to settle complex business and educational decisions for the world! Who knew it was so easy to determine exactly what is needed for educational purposes!

Case in point: EFF feels that Google’s use of anonymous and aggregated student data in order to improve its education apps is not an educational purpose. Seriously? How can that not be useful for educational purposes — to improve its educational apps!?

And, according to EFF, the fact that Chrome Sync is ‘on’ by default in the Chromebooks only amplifies the harm caused by the non-Education data tracking because, when the students log in outside of school, their behavior can be correlated with their in-school behavior. Of course, this ignores the fact that the same limitations apply to the tracking — it happens only on non-Education apps. Thus, the Chrome Sync objection is somehow vaguely based on geography. The fact that Google can correlate an individual student’s viewing of a Neil deGrasse Tyson video in a computer lab at school with her later finishing that video at home is somehow really bad (or so EFF claims).

EFF also takes issue with the fact that school administrators are allowed to turn on a setting enabling third parties to access the geolocation data of Google education apps users.

The complaint is fairly sparse on this issue — and the claim is essentially limited to the assertion that “[s]haring a student’s physical location with third parties is unquestionably sharing personal information beyond what is needed for educational purposes[.]” While it’s possible that third parties could misuse student data, a presumption that it is per se outside of any educational use for third parties to have geolocation access at all strikes me as unreasonable.

Geolocation data, particularly on mobile devices, could allow for any number of positive and negative uses, and without more it’s hard to really take EFF’s premature concern all that seriously. Did they conduct a study demonstrating that geolocation data can serve no educational purpose or that the feature is frequently abused? Sadly, it seems doubtful. Instead, they appear to be relying upon the rather loose definition of likely harm that we have seen in FTC actions in other contexts (more on this problem here).

Who decides what ambiguous terms mean?

The bigger issue, however, is the ambiguity latent in the Pledge and how that ambiguity is being exploited to criticize Google. The complaint barely conceals EFF’s eagerness, and gives one the distinct feeling that the Pledge and this complaint are part of a long game. Everyone knows that Google’s entire existence revolves around the clever and innovative employment of large data sets. When Google announced that it was interested in working with schools to provide technology to students, I can only imagine how the anti-big-data-for-any-commercial-purpose crowd sat up and took notice, just waiting to pounce as soon as an opportunity, no matter how tenuous, presented itself.

EFF notes that “[u]nlike Microsoft and numerous other developers of digital curriculum and classroom management software, Google did not initially sign onto the Student Privacy Pledge with the first round of signatories when it was announced in the fall of 2014.” Apparently, it is an indictment of Google that it hesitated to adopt an external statement of privacy principles that was authored by a group that had no involvement with Google’s internal operations or business realities. EFF goes on to note that it was only after “sustained criticism” that Google “reluctantly” signed the pledge. So the company is badgered into signing a pledge that it was reluctant to sign in the first place (almost certainly for exactly these sorts of reasons), and is now being skewered by the proponents of the pledge that it was reluctant to sign. Somehow I can’t help but get the sense that this FTC complaint was drafted even before Google signed the Pledge.

According to the Pledge, Google promised to:

  1. “Not collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student.”
  2. “Not build a personal profile of a student other than for supporting authorized educational/school purposes or as authorized by the parent/student.”
  3. “Not knowingly retain student personal information beyond the time period required to support the authorized educational/school purposes, or as authorized by the parent/student.”

EFF interprets “educational purpose” as anything a student does while logged into her education account and, by extension, treats even non-educational activity as generating “student personal information.” I think that a fair reading of the Pledge undermines this position, however, and that the correct interpretation is that “educational purpose” and “student personal information” are more tightly coupled: Google’s ability to collect student data is circumscribed only when the student is actually using the Google for Education apps.

So what counts as “student personal information” in the pledge? “Student personal information” is “personally identifiable information as well as other information when it is both collected and maintained on an individual level and is linked to personally identifiable information.”  Although this is fairly broad, it is limited by the definition of “Educational/School purposes” which are “services or functions that customarily take place at the direction of the educational institution/agency or their teacher/employee, for which the institutions or agency would otherwise use its own employees, and that aid in the administration or improvement of educational and school activities.” (emphasis added).

This limitation in the Pledge essentially sinks EFF’s complaint. A major part of EFF’s gripe is that when the students interact with non-Education services, Google tracks them. However, the Pledge limits the collection of information only in contexts where “the institutions or agency would otherwise use its own employees” — a definition that clearly does not extend to general Internet usage. It would reasonably cover activities like administering classes, tests, and lessons, but not activity such as general searches or watching videos on YouTube. Key to EFF’s error is that the Pledge operates not on accounts but on activity — in particular, educational activity “for which the institutions or agency would otherwise use its own employees.”

To interpret Google’s activity in the way that EFF does is to treat the Pledge as a promise never to do anything, ever, with the data of a student logged into an education account, whether generated as part of Education apps or otherwise. That just can’t be right. Thinking through the implications of EFF’s complaint, the ultimate end has to be that Google needs to obtain a permission slip from parents before offering access to Google for Education accounts. Administrators and Google are just not allowed to provision any services otherwise.

And here is where the long game comes in. EFF and its peers induced Google to sign the Pledge all the while understanding that their interpretation would necessarily require a re-write of Google’s business model.  But not only is this sneaky, it’s also ridiculous. By way of analogy, this would be similar to allowing parents an individual say over what textbooks or other curricular materials their children are allowed to access. This would either allow for a total veto by a single parent, or else would require certain students to be frozen out of participating in homework and other activities being performed with a Google for Education app. That may work for Yale students hiding from microaggressions, but it makes no sense to read such a contentious and questionable educational model into Google’s widely-offered apps.

I think a more reasonable interpretation should prevail. The privacy pledge is meant to govern the use of student data while that student is acting as a student — which in the case of Google for Education apps would mean while using said apps. Plenty of other Google apps could be used for educational purposes, but Google is intentionally delineating a sensible dividing line in order to avoid exactly this sort of problem (as well as problems that could arise under other laws directed at student activity, like COPPA, most notably). It is entirely unreasonable to presume that Google, by virtue of its socially desirable behavior of enabling students to have ready access to technology, is thereby prevented from tracking individuals’ behavior on non-Education apps as it chooses to define them.

What is the Harm?

According to EFF, there are two primary problems with Google’s gathering and use of student data: gathering and using individual data in non-Education apps, and gathering and using anonymized and aggregated data in the Education apps. So to what evil end does Google put the data gathered in non-Education apps?

“Google not only collects and stores the vast array of student data described above, but uses it for its own purposes such as improving Google products and serving targeted advertising (within non-Education Google services)”

The horrors! Google wants to use student behavior to improve its services! And yes, I get it, everyone hates ads — I hate ads too — but at some point you need to learn to accept that the wealth of nominally free apps available to every user is underwritten by the ad-sphere. So if Google is using the non-Education behavior of students to gain valuable insights that it can monetize and thereby subsidize its services, so what? This is life in the twenty-first century, and until everyone collectively decides that we prefer to pay for services up front, we had better get used to being tracked and monetized by advertisers.

But as noted above, whether you think Google should or shouldn’t be gathering this data, it seems clear that the data generated from use of non-Education apps doesn’t fall under the Pledge’s purview. Thus, perhaps sensing the problems in its non-Education use argument, EFF also half-heartedly attempts to demonize certain data practices that Google employs in the Education context. In short, Google aggregates and anonymizes the usage data of the Google for Education apps, and, according to EFF, this is a violation of the Pledge:

“Aggregating and anonymizing students’ browsing history does not change the intensely private nature of the data … such that Google should be free to use it[.]”

Again, the “harm” is that Google actually wants to improve the Education apps: “Google has acknowledged that it collects, maintains, and uses student information via Chrome Sync (in aggregated and anonymized form) for the purpose of improving Google products.”

This of course doesn’t violate the Pledge. After all, signatories to the Pledge promise only that they will “[n]ot collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes.” It’s eminently reasonable to include the improvement of the provisioned services as part of an “authorized educational … purpose[.]” And by ensuring that the data is anonymized and aggregated, Google is clearly acknowledging that some limits are appropriate in the education context — that it doesn’t need to collect individual and identifiable personal information for education purposes — but that improving its education products the same way it improves all its products is an educational purpose.

How are the harms enhanced by Chrome Sync? Honestly, it’s not really clear from EFF’s complaint. I believe that the core of EFF’s gripe (at least here) has to do with how the two data gathering activities may be correlated with each other. Google has Chrome Sync enabled by default, so when students sign on from different locations, their Education apps usage is recorded and grouped (still anonymously) for service improvement alongside their non-Education use. And the presence of these two data sets being generated side-by-side creates the potential to track students in their educational capacity by correlating with information generated in their non-educational capacity.

Maybe there are potential flaws in the manner in which the data is anonymized. Obviously EFF thinks anonymized data won’t stay anonymized. That is a contentious view, to say the least, but regardless, it is in no way compelled by the Pledge. But more to the point, merely having both data sets does not do anything that clearly violates the Pledge.

The End Game

So what do groups like EFF actually want? It’s important to consider the effects on social welfare that this approach to privacy takes, and its context. First, the Pledge was overwhelmingly designed for and signed by pure education companies, and not large organizations like Google, Apple, or Microsoft — thus the nature of the Pledge itself is ill-suited to a multi-faceted business model.

If we follow the logical conclusions of this complaint, a company like Google would face an undesirable choice: On the one hand, it can provide hardware to schools at zero cost or heavily subsidized prices, and also provide a suite of useful educational applications. However, as part of this socially desirable donation, it must also place a virtual invisibility shield around students once they’ve signed into their accounts. From that point on, regardless of what service they use — even non-educational ones — Google is prevented from using any data students generate. At this point, one has to question Google’s incentive to remove huge swaths of the population from its ability to gather data. If Google did nothing but provide the hardware, it could simply leave its free services online as-is, and let schools adopt or not adopt them as they wish (subject of course to extant legislation such as COPPA) — thereby allowing itself to possibly collect even more data on the same students.

On the other hand, if not Google, then surely many other companies would think twice before wading into this quagmire, or, when they do, they might offer severely limited services. For instance, one way of complying with EFF’s view of how the Pledge works would be to shut off access to all non-Education services. So, students logged into an education account could only access the word processing and email services, but would be prevented from accessing YouTube, web search and other services — and consequently suffer from a limitation of potentially novel educational options.

EFF goes on to cite numerous FTC enforcement actions and settlements from recent years. But all of the cited examples have one thing in common that the current complaint does not: they all involve violations of § 5 based on explicit statements or representations made by a company to consumers. EFF’s complaint, on the other hand, is based on a particular interpretation of an ambiguous document drafted in general terms, wholly apart from the complicated business practice at issue. What counts as “student information” when a user employs a general purpose machine for both educational purposes and non-educational purposes? The Pledge — at least the sections that EFF relies upon in its complaint — is far from clear and doesn’t cover Google’s behavior in an obvious manner.

Of course, the whole complaint presumes that the nature of Google’s services was somehow unfair or deceptive to parents — thus implying that there was at least some material reliance on the Pledge in parental decision making. However, this misses a crucial detail: it is the school administrators who contract with Google for the Chromebooks and Google for Education services, and not the parents or the students.  Then again, maybe EFF doesn’t care and it is, as I suggest above, just interested in a long game whereby it can shoehorn Google’s services into some new sort of privacy regime. This isn’t all that unusual, as we have seen even the White House in other contexts willing to rewrite business practices wholly apart from the realities of privacy “harms.”

But in the end, this approach to privacy is just a very efficient way to discover the lowest common denominator in charity. If it even decides to brave the possible privacy suits, Google and other similarly situated companies will provide the barest access to the most limited services in order to avoid extensive liability from ambiguous pledges. And, perhaps even worse for overall social welfare, using the law to force compliance with voluntarily enacted, ambiguous codes of conduct is a sure-fire way to make sure that there are fewer and more limited codes of conduct in the future.

Thanks to the Truth on the Market bloggers for having me. I’m a long-time fan of the blog, and excited to be contributing.

The Third Circuit will soon review the appeal of generic drug manufacturer Mylan Pharmaceuticals in the latest case involving “product hopping” in the pharmaceutical industry — Mylan Pharmaceuticals v. Warner Chilcott.

Product hopping occurs when brand pharmaceutical companies shift their marketing efforts from an older version of a drug to a new, substitute drug in order to stave off competition from cheaper generics. This business strategy is the predictable response to the incentives created by the arduous FDA approval process, patent law, and state automatic substitution laws. It costs brand companies an average of $2.6 billion to bring a new drug to market, but only 20 percent of marketed brand drugs ever earn enough to recoup these costs. Moreover, once their patent exclusivity period is over, brand companies face the likely loss of 80-90 percent of their sales to generic versions of the drug under state substitution laws that allow or require pharmacists to automatically substitute a generic-equivalent drug when a patient presents a prescription for a brand drug. Because generics are automatically substituted for brand prescriptions, generic companies typically spend very little on advertising, instead choosing to free ride on the marketing efforts of brand companies. Rather than hand over a large chunk of their sales to generic competitors, brand companies often decide to shift their marketing efforts from an existing drug to a new drug with no generic substitutes.

Generic company Mylan is appealing U.S. District Judge Paul S. Diamond’s April decision to grant defendant and brand company Warner Chilcott’s summary judgment motion. Mylan and other generic manufacturers contend that Defendants engaged in a strategy to impede generic competition for branded Doryx (an acne medication) by executing several product redesigns and ceasing promotion of prior formulations. Although the plaintiffs generally changed their products to keep up with the brand-drug redesigns, they contend that these redesigns were intended to circumvent automatic substitution laws, at least for the periods of time before the generic companies could introduce a substitute to new brand drug formulations. The plaintiffs argue that product redesigns that prevent generic manufacturers from benefitting from automatic substitution laws violate Section 2 of the Sherman Act.

Product redesign is not per se anticompetitive. Retiring an older branded version of a drug does not block generics from competing; they are still able to launch and market their own products. Product redesign only makes competition tougher because generics can no longer free ride on automatic substitution laws; instead they must either engage in their own marketing efforts or redesign their product to match the brand drug’s changes. Moreover, product redesign does not affect a primary source of generics’ customers—beneficiaries that are channeled to cheaper generic drugs by drug plans and pharmacy benefit managers.

The Supreme Court has repeatedly concluded that “the antitrust laws…were enacted for the protection of competition not competitors” and that even monopolists have no duty to help a competitor. The district court in Mylan generally agreed with this reasoning, concluding that the brand company Defendants did not exclude Mylan and other generics from competition: “Throughout this period, doctors remained free to prescribe generic Doryx; pharmacists remained free to substitute generics when medically appropriate; and patients remained free to ask their doctors and pharmacists for generic versions of the drug.” Instead, the court argued that Mylan was a “victim of its own business strategy”—a strategy that relied on free-riding off brand companies’ marketing efforts rather than spending any of their own money on marketing. The court reasoned that automatic substitution laws provide a regulatory “bonus” and denying Mylan the opportunity to take advantage of that bonus is not anticompetitive.

Product redesign should only give rise to anticompetitive claims if combined with some other wrongful conduct, or if the new product is clearly a “sham” innovation. Indeed, Senior Judge Douglas Ginsburg and then-FTC Commissioner Joshua D. Wright recently came out against imposing competition law sanctions on product redesigns that are not sham innovations. If lawmakers are concerned that product redesigns will reduce generic usage and the cost savings they create, they could follow the lead of several states that have broadened automatic substitution laws to allow the substitution of generics that are therapeutically-equivalent but not identical in other ways, such as dosage form or drug strength.

Mylan is now asking the Third Circuit to reexamine the case. If the Third Circuit reverses the lower court’s decision, it would imply that brand drug companies have a duty to continue selling superseded drugs in order to allow generic competitors to take advantage of automatic substitution laws. If the Third Circuit upholds the district court’s ruling on summary judgment, it will likely create a circuit split between the Second and Third Circuits. In July 2015, the Second Circuit upheld an injunction in NY v. Actavis that required a brand company to continue manufacturing and selling an obsolete drug until after generic competitors had an opportunity to launch their generic versions and capture a significant portion of the market through automatic substitution laws. I’ve previously written about the duty created in this case.

Regardless of whether the Third Circuit’s decision causes a split, the Supreme Court should take up the issue of product redesign in pharmaceuticals to provide guidance to brand manufacturers that currently operate in a world of uncertainty and under the constant threat of litigation for decisions they make when introducing new products.

On October 7, 2015, the Senate Judiciary Committee held a hearing on the “Standard Merger and Acquisition Reviews Through Equal Rules” (SMARTER) Act of 2015.  As former Antitrust Modernization Commission Chair (and former Acting Assistant Attorney General for Antitrust) Deborah Garza explained in her testimony, “[t]he premise of the SMARTER Act is simple:  A merger should not be treated differently depending on which antitrust enforcement agency – DOJ or the FTC – happens to review it.  Regulatory outcomes should not be determined by a flip of the merger agency coin.”

Ms. Garza is clearly correct.  Both the U.S. Justice Department (DOJ) and the U.S. Federal Trade Commission (FTC) enforce the federal antitrust merger review provision, Section 7 of the Clayton Act, and employ a common set of substantive guidelines (last revised in 2010) to evaluate merger proposals.  Neutral “rule of law” principles indicate that private parties should expect to have their proposed mergers subject to the same methods of assessment and an identical standard of judicial review, regardless of which agency reviews a particular transaction.  (The two agencies decide by mutual agreement which agency will review any given merger proposal.)

Unfortunately, however, that is not the case today.  The FTC’s independent ability to challenge mergers administratively, combined with the difference in statutory injunctive standards that apply to FTC and DOJ merger reviews, means that a particular merger application may face more formidable hurdles if reviewed by the FTC rather than DOJ.  These two differences commendably would be eliminated by the SMARTER Act, which would subject the FTC to current DOJ standards.  The SMARTER Act would not deal with a third difference – the fact that DOJ merger consent decrees, but not FTC merger consent decrees, must be filed with a federal court for “public interest” review.  This commentary briefly addresses those three issues.  The first and second ones present significant “rule of law” problems, in that they involve differences in statutory language applied to the same conduct.  The third issue, the question of judicial review of settlements, is of a different nature, but nevertheless raises substantial policy concerns.

  1. FTC Administrative Authority

The first rule of law problem stems from the broader statutory authority the FTC possesses to challenge mergers.  In merger cases, while DOJ typically consolidates actions for a preliminary and permanent injunction in district court, the FTC merely seeks a preliminary injunction (which is easier to obtain than a permanent injunction) and “holds in its back pocket” the ability to challenge a merger in an FTC administrative proceeding – a power DOJ does not possess.  In short, the FTC subjects proposed mergers to a different and more onerous method of assessment than DOJ.  In Ms. Garza’s words (footnotes deleted):

“Despite the FTC’s legal ability to seek permanent relief from the district court, it prefers to seek a preliminary injunction only, to preserve the status quo while it proceeds with its administrative litigation.

This approach has great strategic significance. First, the standard for obtaining a preliminary injunction in government merger challenges is lower than the standard for obtaining a permanent injunction. That is, it is easier to get a preliminary injunction.

Second, as a practical matter, the grant of a preliminary injunction is typically sufficient to end the matter. In nearly every case, the parties will abandon their transaction rather than incur the heavy cost and uncertainty of trying to hold the merger together through further proceedings—which is why merging parties typically seek to consolidate proceedings for preliminary and permanent relief under Rule 65(a)(2). Time is of the essence. As one witness testified before the [Antitrust Modernization Commission], “it is a rare seller whose business can withstand the destabilizing effect of a year or more of uncertainty” after the issuance of a preliminary injunction.

Third, even if the court denies the FTC its preliminary injunction and the parties close their merger, the FTC can still continue to pursue an administrative challenge with an eye to undoing or restructuring the transaction. This is the “heads I win, tails you lose” aspect of the situation today. It is very difficult for the parties to get to the point of a full hearing in court given the effect of time on transactions, even with the FTC’s expedited administrative procedures adopted in about 2008. . . . 

[Moreover,] [while] [u]nder its new procedures, parties can move to dismiss an administrative proceeding if the FTC has lost a motion for preliminary injunction and the FTC will consider whether to proceed on a case-by-case basis[,] . . . th[is] [FTC] policy could just as easily change again, unless Congress speaks.”

Typically time is of the essence in proposed mergers, so substantial delays occasioned by extended reviews of those transactions may prevent many transactions from being consummated, even if they eventually would have passed antitrust muster.  Ms. Garza’s testimony, plus testimony by former Deputy Assistant Attorney General for Antitrust Abbott (Tad) Lipsky, document cases of substantial delay in FTC administrative reviews of merger proposals.  (As Mr. Lipsky explained, “[a]ntitrust practitioners have long perceived that the possibility of continued administrative litigation by the FTC following a court decision constitutes a significant disincentive for parties to invest resources in transaction planning and execution.”)  Congress should weigh these delay-specific costs, as well as the direct costs of any additional burdens occasioned by FTC administrative procedures, in deciding whether to require the FTC (like DOJ) to rely solely on federal court proceedings.

  2. Differences Between FTC and DOJ Injunctive Standards

The second rule of law problem arises from the lighter burden the FTC must satisfy to obtain injunctive relief in federal court.  Under Section 13(b) of the FTC Act, an injunction shall be granted the FTC “[u]pon a proper showing that, weighing the equities and considering the Commission’s likelihood of success, such action would be in the public interest.”  The D.C. Circuit (in FTC v. H.J. Heinz Co. and in FTC v. Whole Foods Market, Inc.) has stated that, to meet this burden, the FTC need merely have raised questions “so serious, substantial, difficult and doubtful as to make them fair ground for further investigation.”  By contrast, as Ms. Garza’s testimony points out, “under Section 15 of the Clayton Act, courts generally apply a traditional equities test requiring DOJ to show a reasonable likelihood of success on the merits—not merely that there is ‘fair ground for further investigation.’”  In a similar vein, Mr. Lipsky’s testimony stated that “[t]he cumulative effect of several recent contested merger decisions has been to allow the FTC to argue that it needn’t show likelihood of success in order to win a preliminary injunction; specifically these decisions suggest that the Commission need only show ‘serious, substantial, difficult and doubtful’ questions regarding the merits.”  Although some commentators have contended that, in reality, the two standards generally will be interpreted in a similar fashion (“whatever theoretical difference might exist between the FTC and DOJ standards has no practical significance”), there is no doubt that the language of the two standards is different – and basic principles of statutory construction indicate that differences in statutory language should be given meaning and not ignored.  Accordingly, merging parties face the real prospect that they might fare worse under federal court review of an FTC challenge to their merger proposal than they would have fared had DOJ challenged the same transaction.  Such an outcome, even if it is rare, would be at odds with neutral application of the rule of law.

  3. The Tunney Act

Finally, helpful as it is, the SMARTER Act does not entirely eliminate the disparate treatment of proposed mergers by DOJ and the FTC.  The Tunney Act, 15 U.S.C. § 16, enacted in 1974, which applies to DOJ but not to the FTC, requires that DOJ submit all proposed consent judgments under the antitrust laws (including Section 7 of the Clayton Act) to a federal district court for 60 days of public comment prior to being entered.

a.  Economic Costs (and Potential Benefits) of the Tunney Act

The Tunney Act potentially interjects uncertainty into the nature of the “deal” struck between merging parties and DOJ in merger cases.  It does this by subjecting proposed DOJ merger settlements (and other DOJ non-merger civil antitrust settlements) to a 60-day public review period, requiring federal judges to determine whether a proposed settlement is “in the public interest” before entering it, and instructing the court to consider the impact of the entry of judgment “upon competition and upon the public generally.”  Leading antitrust practitioners have noted that this uncertainty “could affect shareholders, customers, or even employees. Moreover, the merged company must devote some measure of resources to dealing with the Tunney Act review—resources that instead could be devoted to further integration of the two companies or generation of any planned efficiencies or synergies.”  More specifically:

“[W]hile Tunney Act proceedings are pending, a merged company may have to consider how its post-close actions and integration could be perceived by the court, and may feel the need to compete somewhat less aggressively, lest its more muscular competitive actions be taken by the court, amici, or the public at large to be the actions of a merged company exercising enhanced market power. Such a distortion in conduct probably was not contemplated by the Tunney Act’s drafters, but merger partners will need to be cognizant of how their post-close actions may be perceived during Tunney Act review.”

Although the Tunney Act has been justified on traditional “public interest” grounds, even one of its scholarly supporters (a DOJ antitrust attorney), in praising its purported benefits, has acknowledged its potential for abuse:

“Properly interpreted and applied, the Tunney Act serves a number of related, useful functions. The disclosure provisions and judicial approval requirement for decrees can help identify, and more importantly deter, “influence peddling” and other abuses. The notice-and-comment procedures force the DOJ to explain its rationale for the settlement and provide its answers to objections, thus providing transparency. They also provide a mechanism for third-party input, and, thus, a way to identify and correct potentially unnoticed problems in a decree. Finally, the court’s public interest review not only helps ensure that the decree benefits the public, it also allows the court to protect itself against ambiguous provisions and enforcement problems and against an objectionable or pointless employment of judicial power. Improperly applied, the Tunney Act does more harm than good. When a district court takes it upon itself to investigate allegations not contained in a complaint, or attempts to “re-settle” a case to provide what it views as stronger, better relief, or permits lengthy, unfocused proceedings, the Act is turned from a useful check to an unpredictable, costly burden.”

The justifications presented by the author are open to serious question.  Whether “influence peddling” can be detected merely from the filing of proposed decree terms is doubtful – corrupt deals to settle a matter presumably would be done “behind the scenes” in a manner not available to public scrutiny.  The economic expertise and detailed factual knowledge that informs a DOJ merger settlement cannot be fully absorbed by a judge (who may fall prey to his or her personal predilections as to what constitutes good policy) during a brief review period.  “Transparency” that facilitates “third-party input” can too easily be manipulated by rent-seeking competitors who will “trump up” justifications for blocking an efficient merger.  Moreover, third parties who are opposed to mergers in general may also be expected to file objections to efficient arrangements.  In short, the “sunshine” justification for Tunney Act filings is more likely to cloud the evaluation of DOJ policy calls than to provide clarity.

b.  Constitutional Issues Raised by the Tunney Act

In addition to potential economic inefficiencies, the judicial review feature of the Tunney Act raises serious separation of powers issues, as emphasized by the DOJ Office of Legal Counsel (OLC, which advises the Attorney General and the President on questions of constitutional interpretation) in a 1989 opinion regarding qui tam provisions of the False Claims Act:

“There are very serious doubts as to the constitutionality . . . of the Tunney Act:  it intrudes into the Executive power and requires the courts to decide upon the public interest – that is, to exercise a policy discretion normally reserved to the political branches.  Three Justices of the Supreme Court questioned the constitutionality of the Tunney Act in Maryland v. United States, 460 U.S. 1001 (1983) (Rehnquist, J., joined by Burger, C.J., and White, J., dissenting).”

Notably, this DOJ critique of the Tunney Act was written before the 2004 amendments to that statute that specifically empower courts to consider the impact of proposed settlements “upon competition and upon the public generally” – language that significantly trenches upon Executive Branch prerogatives.  Admittedly, the Tunney Act has withstood judicial scrutiny – no court has ruled it unconstitutional.   Moreover, a federal judge can only accept or reject a Tunney Act settlement, not rewrite it, somewhat ameliorating its affront to the separation of powers.  In short, even though it may not be subject to serious constitutional challenge in the courts, the Tunney Act is problematic as a matter of sound constitutional policy.

c.  Congressional Reexamination of the Tunney Act

These economic and constitutional policy concerns suggest that Congress may wish to carefully reexamine the merits of the Tunney Act.  Any such reexamination, however, should be independent of, and not delay expedited consideration of, the SMARTER Act.  The Tunney Act, although of undoubted significance, is only a tangential aspect of the divergent legal standards that apply to FTC and DOJ merger reviews.  It is beyond the scope of current legislative proposals but it merits being taken up at an appropriate time – perhaps in the next Congress.  When Congress turns to the Tunney Act, it may wish to consider four options:  (1) repealing the Act in its entirety; (2) retaining the Act as is; (3) partially repealing it only with respect to merger reviews; or, (4) applying it in full force to the FTC.  A detailed evaluation of those options is beyond the scope of this commentary.

Conclusion

In sum, in order to eliminate inconsistencies between FTC and DOJ standards for reviewing proposed mergers, Congress should give serious consideration to enacting the SMARTER Act, which would both eliminate FTC administrative review of merger proposals and subject the FTC to the same injunctive standard as the DOJ in judicial review of those proposals.  Moreover, if the SMARTER Act is enacted, Congress should also consider going further and amending the Tunney Act to make it apply to FTC as well as to DOJ merger settlements – or, alternatively, to have it not apply at all to any merger settlements (a result which would better respect the constitutional separation of powers and reduce a potential source of economic inefficiency).

Applying antitrust law to combat “hold-up” attempts (involving demands for “anticompetitively excessive” royalties) or injunctive actions brought by standard essential patent (SEP) owners is inherently problematic, as explained by multiple scholars (see here and here, for example).  Disputes regarding compensation to SEP holders are better handled in patent infringement and breach of contract lawsuits, and adding antitrust to the mix imposes unnecessary costs and may undermine involvement in standard setting and harm innovation.  What’s more, as FTC Commissioner Maureen Ohlhausen and former FTC Commissioner Joshua Wright have pointed out (citing research), empirical evidence suggests there is no systematic problem with hold-up.  Indeed, to the contrary, a recent empirical study by Professors from Stanford, Berkeley, and the University of the Andes, accepted for publication in the Journal of Competition Law and Economics, finds that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy – a result totally at odds with theories of SEP-related competitive harm.  Thus, application of a cost-benefit approach that seeks to maximize the welfare benefits of antitrust enforcement strongly militates against continuing to pursue “SEP abuse” cases.  Enforcers should instead focus on more traditional investigations that seek to ferret out conduct that is far more likely to be welfare-inimical, if they are truly concerned about maximizing consumer welfare.

But are the leaders at the U.S. Department of Justice Antitrust Division (DOJ) and the Federal Trade Commission (FTC) paying any attention?  The most recent public reports are not encouraging.

In a very recent filing with the U.S. International Trade Commission (ITC), FTC Chairwoman Edith Ramirez stated that “the danger that bargaining conducted in the shadow of an [ITC] exclusion order will lead to patent hold-up is real.”  (Comparable to injunctions, ITC exclusion orders preclude the importation of items that infringe U.S. patents.  They are the only effective remedy the ITC can give for patent infringement, since the ITC cannot assess damages or royalties.)  She thus argued that, before issuing an exclusion order, the ITC should require an SEP holder to show that the infringer is unwilling or unable to enter into a patent license on “fair, reasonable, and non-discriminatory” (FRAND) terms – a new and major burden on the vindication of patent rights.  In justifying this burden, Chairwoman Ramirez pointed to Motorola’s allegedly excessive SEP royalty demands from Microsoft – $6-$8 per gaming console, as opposed to a federal district court finding that pennies per console was the appropriate amount.  She also cited LSI Semiconductor’s demand for royalties that exceeded the selling price of Realtek’s standard-compliant product, whereas a federal district court found the appropriate royalty to be only 0.19% of the product’s selling price.  But these two examples do not support Chairwoman Ramirez’s point – quite the contrary.  The fact that high initial royalty requests subsequently are slashed by patent courts shows that the patent litigation system is working, not that antitrust enforcement is needed, or that a special burden of proof must be placed on SEP holders.  Moreover, differences in bargaining positions are to be expected as part of the normal back-and-forth of bargaining.  Indeed, if anything, the extremely modest judicial royalty assessments in these cases raise the concern that SEP holders are being undercompensated, not overcompensated.

A recent speech by DOJ Assistant Attorney General for Antitrust (AAG) William J. Baer, delivered at the International Bar Association’s Competition Conference, suffers from the same sort of misunderstanding as Chairwoman Ramirez’s ITC filing.  Stating that “[h]old up concerns are real”, AAG Baer cited the two examples described by Chairwoman Ramirez.  He also mentioned the fact that Innovatio requested a royalty rate of over $16 per smart tablet for its SEP portfolio, but was awarded a rate of less than 10 cents per unit by the court.  While admitting that the implementers “proved victorious in court” in those cases, he asserted that “not every implementer has the wherewithal to litigate”, that “[s]ometimes implementers accede to licensors’ demands, fearing exclusion and costly litigation”, that “consumers can be harmed and innovation incentives are distorted”, and that therefore “[a] future of exciting new products built atop existing technology may be . . . deferred”.  These theoretical concerns are belied by the lack of empirical support for hold-up, and are contradicted by the recent finding, previously noted, that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy.  (In addition, the implementers of patented technology tend to be large corporations; AAG Baer’s assertion that some may not have “the wherewithal to litigate” is a bare proposition unsupported by empirical evidence or more nuanced analysis.)  In short, DOJ, like FTC, is advancing an argument that undermines, rather than bolsters, the case for applying antitrust to SEP holders’ efforts to defend their patent rights.

Ideally the FTC and DOJ should reevaluate their recent obsession with allegedly abusive unilateral SEP behavior and refocus their attention on truly serious competitive problems.  (Chairwoman Ramirez and AAG Baer are both outstanding and highly experienced lawyers who are well-versed in policy analysis; one would hope that they would be open to reconsidering current FTC and DOJ policy toward SEPs, in light of hard evidence.)  Doing so would benefit consumer welfare and innovation – which are, after all, the goals that those important agencies are committed to promoting.

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest that there is widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but there is nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped up vigilance, particularly of health insurance mergers, is required to ensure that continued competition in health insurance markets isn’t undermined, and that the realization of the ACA’s objectives in the provider market aren’t undermined by insurance companies engaging in anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. We need instead to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in the healthcare markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss important but ambiguous effects of the law on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are substantial negative effects, of course. Requiring insurance companies to accept patients with pre-existing conditions reduces the ability of insurance companies to manage risk. This has led to upward pricing pressure for premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far the projected signups haven’t been realized.

The ACA’s redefinition of what constitutes an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the Medical Loss Ratio rule requiring insurance companies to spend at least 80% of premiums on healthcare, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new marketplace entry or competition in other submarkets.

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because there is less need for agents to sell in a new market with the online exchanges). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both healthcare providers and health insurance companies.

Further, some of the ACA’s effects have  decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that the government gives higher payments for Medicare services delivered by a hospital versus an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent from hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what they can’t say is that increased consolidation per se is clearly problematic, nor that, even if it is correlated with sub-optimal outcomes, it is the consolidation causing those outcomes rather than something else (like the ACA) causing both the sub-optimal outcomes and the consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and offering the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad” may be undermining both of these objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful that they do not likewise condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law and economics approach that they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run, as well as consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better quality care and cost savings passed on to consumers. Increased entry, at least in part due to the ACA in most of the markets in which the merging companies will compete, along with integrated health networks themselves entering and threatening entry into insurance markets, will almost certainly lead to more consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.

Conclusion

In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.

On August 24, the Third Circuit issued its much anticipated decision in FTC v. Wyndham Worldwide Corp., holding that the U.S. Federal Trade Commission (FTC) has authority to challenge cybersecurity practices under its statutory “unfairness” authority.  This case brings into focus both legal questions regarding the scope of the FTC’s cybersecurity authority and policy questions regarding the manner in which that authority should be exercised.

1.     Wyndham: An Overview

Rather than “reinventing the wheel,” let me begin by quoting at length from Gus Hurwitz’s excellent summary of the relevant considerations in this case:

In 2012, the FTC sued Wyndham Worldwide, the parent company and franchisor of the Wyndham brand of hotels, arguing that its allegedly lax data security practices allowed hackers to repeatedly break into its franchisees’ computer systems. The FTC argued that these breaches resulted in harm to consumers totaling over $10 million in fraudulent activity. The FTC brought its case under Section 5 of the FTC Act, which declares “unfair and deceptive acts and practices” to be illegal. The FTC’s basic arguments are that it was, first, deceptive for Wyndham – which had a privacy policy indicating how it handled customer data – to assure consumers that the company took industry-standard security measures to protect customer data; and second, independent of any affirmative assurances that customer data was safe, it was unfair for Wyndham to handle customer data in an insecure way.

This case arose in the broader context of the FTC’s efforts to establish a general law of data security. Over the past two decades, the FTC has begun aggressively pursuing data security claims against companies that suffer data breaches. Almost all of these cases have settled out of court, subject to consent agreements with the FTC. The Commission points to these agreements, along with other public documents that it views as guidance, as creating a “common law of data security.” Responding to a request from the Third Circuit for supplemental briefing on this question, the FTC asserted in no uncertain terms its view that “the FTC has acted under its procedures to establish that unreasonable data security practices that harm consumers are indeed unfair within the meaning of Section 5.”

Shortly after the FTC’s case was filed, Wyndham asked the District Court judge to dismiss the case, arguing that the FTC didn’t have authority under Section 5 to take action against a firm that had suffered a criminal theft of its data. The judge denied this motion. But, recognizing the importance and uncertainty of part of the issue – the scope of the FTC’s “unfairness” authority – she allowed Wyndham to immediately appeal that part of her decision. The Third Circuit agreed to hear the appeal, framing the question as whether the FTC has authority to regulate cybersecurity under its Section 5 “unfairness” authority, and, if so, whether the FTC’s application of that authority satisfied Constitutional Due Process requirements. Oral arguments were heard last March, and the court’s opinion was issued on Monday [August 24]. . . .

In its opinion, the Court of Appeals rejects Wyndham’s arguments that its data security practices cannot be unfair. As such, the case will be allowed to proceed to determine whether Wyndham’s security practices were in fact “unfair” under Section 5. . . .

Recall the setting in which this case arose: the FTC has spent more than a decade trying to create a general law of data security. The reason this case was – and still is – important is because Wyndham was challenging the FTC’s general law of data security.

But the court, in the second part of its opinion, accepts Wyndham’s arguments that the FTC has not developed such a law. This is central to the court’s opinion, because different standards apply to interpretations of laws that courts have developed as opposed to those that agencies have developed. The court outlines these standards, explaining that “a higher standard of fair notice applies [in the context of agency rules] than in the typical civil statutory interpretation case because agencies engage in interpretation differently than courts.”

The court goes on to find that Wyndham had sufficient notice of the requirements of Section 5 under the standard that applies to judicial interpretations of statutes. And it expressly notes that, should the district court decide that the higher standard applies – that is, if the court agrees to apply the general law of data security that the FTC has tried to develop in recent years – the court will need to reevaluate whether the FTC’s rules meet Constitutional muster. That review would be subject to the tougher standard applied to agency interpretations of statutes.

Stressing the Third Circuit’s statement that the FTC had failed to explain how it had “informed the public that it needs to look at [FTC] complaints and consent decrees for guidance[,]” Gus concludes that the Third Circuit’s opinion indicates that the FTC “has lost its war to create a general law of data security” based merely on its prior actions.  According to Gus:

The takeaway, it seems, is that the FTC does have the power to take action against bad security practices, but if it wants to do so in a way that shapes industry norms and legal standards – if it wants to develop a general law of data security – a patchwork of consent decrees and informal statements is insufficient to the task. Rather, it must either pursue its cases to a decision on the merits or develop legally binding rules through . . . rulemaking procedures.

2.     Wyndham’s Implications for the Scope of the FTC’s Legal Authority

I highly respect Gus’s trenchant legal and policy analysis of Wyndham.  I believe, however, that it may somewhat understate the strength of the FTC’s legal position going forward.  The Third Circuit also explained (citations omitted):

Wyndham is only entitled to notice of the meaning of the statute and not to the agency’s interpretation of the statute. . . .

[Furthermore,] Wyndham is entitled to a relatively low level of statutory notice for several reasons. Subsection 45(a) [of the FTC Act, which states “unfair acts or practices” are illegal] does not implicate any constitutional rights here. . . .  It is a civil rather than criminal statute. . . .  And statutes regulating economic activity receive a “less strict” test because their “subject matter is often more narrow, and because businesses, which face economic demands to plan behavior carefully, can be expected to consult relevant legislation in advance of action.” . . . .  In this context, the relevant legal rule is not “so vague as to be ‘no rule or standard at all.’” . . . .  Subsection 45(n) [of the FTC Act, as a prerequisite to a finding of unfairness,] asks whether “the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” While far from precise, this standard informs parties that the relevant inquiry here is a cost-benefit analysis, . . . that considers a number of relevant factors, including the probability and expected size of reasonably unavoidable harms to consumers given a certain level of cybersecurity and the costs to consumers that would arise from investment in stronger cybersecurity. We acknowledge there will be borderline cases where it is unclear if a particular company’s conduct falls below the requisite legal threshold. But under a due process analysis a company is not entitled to such precision as would eliminate all close calls. . . .  Fair notice is satisfied here as long as the company can reasonably foresee that a court could construe its conduct as falling within the meaning of the statute. . . .

[In addition, in 2007, the FTC issued a guidebook on business data security, which] could certainly have helped Wyndham determine in advance that its conduct might not survive the [§ 45(n)] cost-benefit analysis.  Before the [cybersecurity] attacks [on Wyndham’s network], the FTC also filed complaints and entered into consent decrees in administrative cases raising unfairness claims based on inadequate corporate cybersecurity. . . .  That the FTC Commissioners – who must vote on whether to issue a complaint . . . – believe that alleged cybersecurity practices fail the cost-benefit analysis of § 45(n) certainly helps companies with similar practices apprehend the possibility that their cybersecurity could fail as well.

In my view, a fair reading of this Third Circuit language is that:  (1) courts should read key provisions of the FTC Act to encompass cybersecurity practices that the FTC finds are not cost-beneficial; and (2) the FTC’s history of guidance and consent decrees regarding cybersecurity give sufficient notice to companies regarding the nature of cybersecurity plans that the FTC may challenge.   Based on that reading, I conclude that even if a court adopts a very exacting standard for reviewing the FTC’s interpretation of its own statute, the FTC is likely to succeed in future case-specific cybersecurity challenges, assuming that it builds a solid factual record that appears to meet cost-benefit analysis.  Whether other Circuits would agree with the Third Circuit’s analysis is, of course, open to debate (I myself suspect that they probably would).

3.     Sound Policy in Light of Wyndham

Apart from our slightly different “takes” on the legal implications of the Third Circuit’s Wyndham decision, I fully agree with Gus that, as a policy matter, the FTC’s “patchwork of consent decrees and informal statements is insufficient to the task” of building a general law of cybersecurity.  In a 2014 Heritage Foundation Legal Memorandum on the FTC and cybersecurity, I stated:

The FTC’s regulation of business systems by decree threatens to stifle innovation by companies related to data security and to impose costs that will be passed on in part to consumers. Missing from the consent decree calculus is the question of whether the benefits in diminished data security breaches justify those costs—a question that should be at the heart of unfairness analysis. There are no indications that the FTC has even asked this question in fashioning data security consents, let alone made case-specific cost-benefit analyses. This is troubling.

Equally troubling is the fact that the FTC apparently expects businesses to divine from a large number of ad hoc, fact-specific consent decrees with varying provisions what they must do vis-à-vis data security to avoid possible FTC targeting. The uncertainty engendered by sole reliance on complicated consent decrees for guidance (in the absence of formal agency guidelines or litigated court decisions) imposes additional burdens on business planners. . . .

[D]ata security investigations that are not tailored to the size and capacity of the firm may impose competitive disadvantages on smaller rivals in industries in which data protection issues are paramount.

Moreover, it may be in the interest of very large firms to support costlier and more intrusive FTC data security initiatives, knowing that they can better afford the adoption of prohibitively costly data security protocols than their smaller competitors can. This is an example of a “raising rivals’ costs” strategy, which reduces competition by crippling or eliminating rivals.

Given these and related concerns (including the failure of existing FTC reports to give appropriate guidance), I concluded, among other recommendations, that:

[T]he FTC should issue data security guidelines that clarify its enforcement policy regarding data security breaches pursuant to Section 5 of the Federal Trade Commission Act. Such guidelines should be framed solely as limiting principles that tie the FTC’s hands to avoid enforcement excesses. They should studiously avoid dictating to industry the data security principles that firms should adopt. . . .

[T]he FTC should [also] employ a strict cost-benefit analysis before pursuing any new regulatory initiatives, legislative recommendations, or investigations related to other areas of data protection, such as data brokerage or the uses of big data.

In sum, the Third Circuit’s Wyndham decision, while interesting, in no way alters the fact that the FTC’s existing cybersecurity enforcement program is inadequate and unsound.  Whether through guidelines or formal FTC rules (which carry their own costs, including the risk of establishing inflexible standards that ignore future changes in business conditions and technology), the FTC should provide additional guidance to the private sector, rooted in sound cost-benefit analysis.  The FTC should also be ever mindful of the costs it imposes on the economy (including potential burdens on business innovation) whenever it considers bringing enforcement actions in this area.

4.     Conclusion

The debate over the appropriate scope of federal regulation of business cybersecurity programs will continue to rage, as serious data breaches receive public attention and the FTC considers new initiatives.  Let us hope that, as we move forward, federal regulators will fully take into account costs as well as benefits – including, in particular, the risk that federal overregulation will undermine innovation, harm businesses, and weaken the economy.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. And even in competition policy, where the Commission frequently uses economics, it’s not clear the agency entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business and social norms caution skepticism, as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. That approach is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Imagine

totmauthor —  27 August 2015

by Michael Baye, Bert Elwert Professor of Business at the Kelley School of Business, Indiana University, and former Director of the Bureau of Economics, FTC

Imagine a world where competition and consumer protection authorities base their final decisions on scientific evidence of potential harm. Imagine a world where well-intentioned policymakers do not use “possibility theorems” to rationalize decisions that are, in reality, based on idiosyncratic biases or beliefs. Imagine a world where “harm” is measured using a scientific yardstick that accounts for the economic benefits and costs of attempting to remedy potentially harmful business practices.

Many economists—conservatives and liberals alike—have the luxury of pondering this world in the safe confines of ivory towers; they publish in journals read by a like-minded audience that also relies on the scientific method.

Congratulations and thanks, Josh, for superbly articulating these messages in the more relevant—but more hostile—world outside of the ivory tower.

To those of you who might disagree with a few (or all) of Josh’s decisions, I challenge you to examine honestly whether your views on a particular matter are based on objective (scientific) evidence, or on your personal, subjective beliefs. Evidence-based policymaking can be discomforting: It sometimes induces those with philosophical biases in favor of intervention to make laissez-faire decisions, and it sometimes induces people with a bias for non-intervention to make decisions to intervene.