Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesies a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest in the introduction of a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would permit the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of the introduction of this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the innovation incentives of the merging parties in relation to specific current or future products. The application of the SIII theory, however, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole; and (ii) not in relation to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is far more intrusive, however, because R&D incentives are considered in the abstract, without any further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

With this in mind, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rely on sound theoretical and empirical foundations. Yet, despite the efforts of the most celebrated Nobel Prize-winning economists of the past decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (though curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the big six agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (that I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and screen initial intuitions of adverse effects of mergers on innovation through the lenses of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives such as R&D figures and patent statistics as a preliminary screening tool for the assessment of the effects of the merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into products that are commercialised or are equal in terms of commercial value.

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in those markets that are being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, at the risk of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not discount the maturity of patents and, in particular, do not say much about whether the patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement an empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may be only one among many factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely, that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – has fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and on market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Lastly, antitrust agencies are well placed to understand that beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible of all, the title says it all: “The Pretense of Knowledge.”

In a weekend interview with the Washington Post, Donald Trump vowed to force drug companies to negotiate directly with the government on prices in Medicare and Medicaid.  It’s unclear what, if anything, Trump intends for Medicaid; drug makers are already required to sell drugs to Medicaid at the lowest price they negotiate with any other buyer.  For Medicare, Trump didn’t offer any more details about the intended negotiations, but he’s referring to his campaign proposals to allow the Department of Health and Human Services (HHS) to negotiate directly with manufacturers over the prices of drugs covered under Medicare Part D.

Such proposals have been around for quite a while.  As soon as the Medicare Modernization Act (MMA) of 2003 was enacted, creating the Medicare Part D prescription drug benefit, many lawmakers began advocating for government negotiation of drug prices. Both Hillary Clinton and Bernie Sanders favored this approach during their campaigns, and the Obama Administration’s proposed budget for fiscal years 2016 and 2017 included a provision that would have allowed the HHS to negotiate prices for a subset of drugs: biologics and certain high-cost prescription drugs.

However, federal law would have to change if there is to be any government negotiation of drug prices under Medicare Part D. Congress explicitly included a “noninterference” clause in the MMA that stipulates that HHS “may not interfere with the negotiations between drug manufacturers and pharmacies and PDP sponsors, and may not require a particular formulary or institute a price structure for the reimbursement of covered part D drugs.”

Most people don’t understand what it means for the government to “negotiate” drug prices and the implications of the various options.  Some proposals would simply eliminate the MMA’s noninterference clause and allow HHS to negotiate prices for a broad set of drugs on behalf of Medicare beneficiaries.  However, the Congressional Budget Office has already concluded that such a plan would have “a negligible effect on federal spending” because it is unlikely that HHS could achieve deeper discounts than the current private Part D plans (there are 746 such plans in 2017).  The private plans are currently able to negotiate significant discounts from drug manufacturers by offering preferred formulary status for their drugs and channeling enrollees to the formulary drugs with lower cost-sharing incentives. In most drug classes, manufacturers compete intensely for formulary status and offer considerable discounts to be included.

The private Part D plans are required to provide only two drugs in each of several drug classes, giving the plans significant bargaining power over manufacturers by threatening to exclude their drugs.  However, in six protected classes (immunosuppressant, anti-cancer, anti-retroviral, antidepressant, antipsychotic and anticonvulsant drugs), private Part D plans must include “all or substantially all” drugs, thereby eliminating their bargaining power and ability to achieve significant discounts.  Although the purpose of the limitation is to prevent plans from cherry-picking customers by denying coverage of certain high cost drugs, giving the private Part D plans more ability to exclude drugs in the protected classes should increase competition among manufacturers for formulary status and, in turn, lower prices.  And it’s important to note that these price reductions would not involve any government negotiation or intervention in Medicare Part D.  However, as discussed below, excluding more drugs in the protected classes would reduce the value of the Part D plans to many patients by limiting access to preferred drugs.

For government negotiation to make any real difference on Medicare drug prices, HHS must have the ability to not only negotiate prices, but also to put some pressure on drug makers to secure price concessions.  This could be achieved by allowing HHS to also establish a formulary, set prices administratively, or take other regulatory actions against manufacturers that don’t offer price reductions.  Setting prices administratively or penalizing manufacturers that don’t offer satisfactory reductions would be tantamount to a price control.  I’ve previously explained that price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage,  drug shortages in certain markets, and reduced incentives for innovation.

Giving HHS the authority to establish a formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would produce other negative consequences.  Currently, private Medicare Part D plans cover an average of 85% of the 200 most popular drugs, with some plans covering as much as 93%.  In contrast, the drug benefit offered by the Department of Veterans Affairs (VA), one government program that is able to set its own formulary to achieve leverage over drug companies, covers only 59% of the 200 most popular drugs.  The VA’s ability to exclude drugs from the formulary has generated significant price reductions. Indeed, estimates suggest that if the Medicare Part D formulary was restricted to the VA offerings and obtained similar price reductions, it would save Medicare Part D $510 per beneficiary.  However, the loss of access to so many popular drugs would reduce the value of the Part D plans by $405 per enrollee, greatly narrowing the net gains.

History has shown that consumers don’t like their access to drugs reduced.  In 2014, Medicare proposed to take antidepressants, antipsychotic and immunosuppressant drugs off the protected list, thereby allowing the private Part D plans to reduce offerings of these drugs on the formulary and, in turn, reduce prices.  However, patients and their advocates were outraged at the possibility of losing access to their preferred drugs, and the proposal was quickly withdrawn.

Thus, allowing the government to negotiate prices under Medicare Part D could carry important negative consequences.  Policy-makers must fully understand what it means for government to negotiate directly with drug makers, and what the potential consequences are for price reductions, access to popular drugs, drug innovation, and drug prices for other consumers.

On November 9, pharmaceutical stocks soared as Donald Trump’s election victory eased concerns about government intervention in drug pricing. Shares of Pfizer rose 8.5%, Allergan PLC was up 8%, and biotech Celgene jumped 10.4%. Drug distributors also gained, with McKesson up 6.4% and Express Scripts climbing 3.4%. Throughout the campaign, Clinton had vowed to take on the pharmaceutical industry and proposed various reforms to rein in drug prices, from levying fines on drug companies that imposed unjustified price increases to capping patients’ annual expenditures on drugs. Pharmaceutical stocks had generally underperformed this year as the market, like much of America, awaited a Clinton victory.

In contrast, Trump generally had less to say on the subject of drug pricing, hence the market’s favorable response to his unexpected victory. Yet, as the end of the first post-election month draws near, we are still uncertain whether Trump is friend or foe to the pharmaceutical industry. Trump’s only proposal that directly impacts the industry would allow the government to negotiate the prices of Medicare Part D drugs with drug makers. Although this proposal would likely have little impact on prices because existing Part D plans already negotiate prices with drug makers, there is a risk that this “negotiation” could ultimately lead to price controls imposed on the industry. And as I have previously discussed, price controls—whether direct or indirect—are a bad idea for prescription drugs: they lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage, drug shortages in certain markets, and reduced incentives for innovation.

Several of Trump’s other health proposals have mixed implications for the industry. For example, a repeal or overhaul of the Affordable Care Act could eliminate the current tax on drug makers and loosen requirements for Medicaid drug rebates and Medicare Part D discounts. On the other hand, if repealing the ACA reduces the number of people insured, spending on pharmaceuticals would fall. Similarly, if Trump renegotiates international trade deals, pharmaceutical firms could benefit from stronger markets or longer patent exclusivity rights, or they could suffer if foreign countries abandon trade agreements altogether or retaliate with disadvantageous terms.

Yet, with drug spending up 8.5 percent last year and recent pricing scandals sparked by price increases of 500 percent or more on individual drugs (e.g., Martin Shkreli, Valeant Pharmaceuticals, Mylan), the current debate over drug pricing is unlikely to fade. Even a Republican-led Congress and White House are likely to heed the public outcry and do something about drug prices.

Drug makers would be wise to stave off any government-imposed price restrictions by voluntarily limiting price increases on important drugs. Major pharmaceutical company Allergan has recently done just this by issuing a “social contract with patients” that made several drug pricing commitments to its customers. Among other assurances, Allergan has promised to limit price increases to single-digit percentage increases and no longer engage in the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry. Last year throughout the pharmaceutical industry, the prices of the most commonly-used brand drugs increased by over 16 percent and, in the last two years before patent expiry, drug makers increased the list prices of drugs by an average of 35 percent. Thus, Allergan’s commitment will produce significant savings over the life of a product, creating hundreds of millions of dollars in savings to health plans, patients, and the health care system.

If Allergan can make this commitment for its entire drug inventory—more than 80 drugs—why haven’t other companies done the same? Similar commitments by other drug makers might be enough to prevent lawmakers from turning to market-distorting reforms, such as price controls, that could end up doing more harm than good for consumers, the pharmaceutical industry, and long-term innovation.

As Truth on the Market readers prepare to enjoy their Thanksgiving dinners, let me offer some (hopefully palatable) “food for thought” on a competition policy for the new Trump Administration.  In referring to competition policy, I refer not just to lawsuits directed against private anticompetitive conduct, but more broadly to efforts aimed at curbing government regulatory barriers that undermine the competitive process.

Public regulatory barriers are a huge problem.  Their costs have been highlighted by prestigious international research bodies such as the OECD and World Bank, and considered by the International Competition Network’s Advocacy Working Group.  Government-imposed restrictions on competition benefit powerful incumbents and stymie entry by innovative new competitors.  (One manifestation of this that is particularly harmful for American workers and denies job opportunities to millions of lower-income Americans is occupational licensing, whose increasing burdens are delineated in a substantial body of research – see, for example, a 2015 Obama Administration White House Report and a 2016 Heritage Foundation Commentary that explore the topic.)  Federal Trade Commission (FTC) and Justice Department (DOJ) antitrust officials should consider emphasizing “state action” lawsuits aimed at displacing entry barriers and other unwarranted competitive burdens imposed by self-interested state regulatory boards.  When the legal prerequisites for such enforcement actions are not met, the FTC and the DOJ should ramp up their “competition advocacy” efforts, with the aim of convincing state regulators to avoid adopting new restraints on competition – and, where feasible, eliminating or curbing existing restraints.

The FTC and DOJ also should be authorized by the White House to pursue advocacy initiatives whose goal is to dismantle or lessen the burden of excessive federal regulations (such advocacy played a role in furthering federal regulatory reform during the Ford and Carter Administrations).  To bolster those initiatives, the Trump Administration should consider establishing a high-level federal task force on procompetitive regulatory reform, in the spirit of previous reform initiatives.  The task force would report to the president and include senior level representatives from all federal agencies with regulatory responsibilities.  The task force could examine all major regulatory and statutory schemes overseen by Executive Branch and independent agencies, and develop a list of specific reforms designed to reduce federal regulatory impediments to robust competition.  Those reforms could be implemented through specific regulatory changes or legislative proposals, as the case might require.  The task force would have ample material to work with – for example, anticompetitive cartel-like output restrictions, such as those allowed under federal agricultural orders, are especially pernicious.  In addition to specific cartel-like programs, scores of regulatory regimes administered by individual federal agencies impose huge costs and merit particular attention, as documented in the Heritage Foundation’s annual “Red Tape Rising” reports on the growing burden of federal regulation (see, for example, the 2016 edition of Red Tape Rising).

With respect to traditional antitrust enforcement, the Trump Administration should emphasize sound, empirically-based economic analysis in merger and non-merger enforcement.  It should also adopt a “decision-theoretic” approach to enforcement, to the greatest extent feasible.  Specifically, in developing their enforcement priorities, in considering case selection criteria, and in assessing possible new (or amended) antitrust guidelines, DOJ and FTC antitrust enforcers should recall that antitrust is, like all administrative systems, inevitably subject to error costs.  Accordingly, Trump Administration enforcers should be mindful of the outstanding insights provided by Judge (and Professor) Frank Easterbrook on the harm from false positives in enforcement (which are more easily corrected by market forces than false negatives), and by Justice (and Professor) Stephen Breyer on the value of bright-line rules and safe harbors, supported by sound economic analysis.  As to specifics, the DOJ and FTC should issue clear statements of policy on the great respect that should be accorded the exercise of intellectual property rights, to correct Obama antitrust enforcers’ poor record on intellectual property protection (see, for example, here).  The DOJ and the FTC should also accord greater respect to the efficiencies associated with unilateral conduct by firms possessing market power, and should consider reissuing an updated and revised version of the 2008 DOJ Report on Single Firm Conduct.

With regard to international competition policy, procedural issues should be accorded high priority.  Full and fair consideration by enforcers of all relevant evidence (especially economic evidence) and the views of all concerned parties ensures that sound analysis is brought to bear in enforcement proceedings and, thus, that errors in antitrust enforcement are minimized.  Regrettably, a lack of due process in foreign antitrust enforcement has become a matter of growing concern to the United States, as foreign competition agencies proliferate and increasingly bring actions against American companies.  Thus, the Trump Administration should make due process problems in antitrust a major enforcement priority.  White House-level support (ensuring the backing of other key Executive Branch departments engaged in foreign economic policy) for this priority may be essential, in order to strengthen the U.S. Government’s hand in negotiations and consultations with foreign governments on process-related concerns.

Finally, other international competition policy matters also merit close scrutiny by the new Administration.  These include such issues as the inappropriate imposition of extraterritorial remedies on American companies by foreign competition agencies; the harmful impact of anticompetitive foreign regulations on American businesses; and inappropriate attacks on the legitimate exercise of intellectual property by American firms (in particular, American patent holders).  As in the case of process-related concerns, White House attention and broad U.S. Government involvement in dealing with these problems may be essential.

That’s all for now, folks.  May you all enjoy your turkey and have a blessed Thanksgiving with friends and family.

Last week, the Internet Association (“IA”) — a trade group representing some of America’s most dynamic and fastest growing tech companies, including the likes of Google, Facebook, Amazon, and eBay — presented the incoming Trump Administration with a ten page policy paper entitled “Policy Roadmap for New Administration, Congress.”

The document’s content is not surprising, given its source: It is, in essence, a summary of the trade association’s members’ preferred policy positions, none of which is new or newly relevant. Which is fine, in principle; lobbying on behalf of members is what trade associations do — although we should be somewhat skeptical of a policy document that purports to represent the broader social welfare while it advocates for members’ preferred policies.

Indeed, despite being labeled a “roadmap,” the paper is backward-looking in certain key respects — a fact that leads to some strange syntax: “[the document is a] roadmap of key policy areas that have allowed the internet to grow, thrive, and ensure its continued success and ability to create jobs throughout our economy” (emphasis added). Since when is a “roadmap” needed to identify past policies? Indeed, as Bloomberg News reporter Joshua Brustein wrote:

The document released Monday is notable in that the same list of priorities could have been sent to a President-elect Hillary Clinton, or written two years ago.

As a wishlist of industry preferences, this would also be fine, in principle. But as an ostensibly forward-looking document, aimed at guiding policy transition, the IA paper is disappointingly un-self-aware. Rather than delineating an agenda aimed at improving policies to promote productivity, economic development and social cohesion throughout the economy, the document is overly focused on preserving certain regulations adopted at the dawn of the Internet age (when the internet was capitalized). Even more disappointing given the IA member companies’ central role in our contemporary lives, the document evinces no consideration of how Internet platforms themselves should strive to balance rights and responsibilities in new ways that promote meaningful internet freedom.

In short, the IA’s Roadmap constitutes a policy framework dutifully constructed to enable its members to maintain the status quo. While that might also serve to further some broader social aims, it’s difficult to see in the approach anything other than a defense of what got us here — not where we go from here.

To take one important example, the document reiterates the IA’s longstanding advocacy for the preservation of the online-intermediary safe harbors of the 20-year-old Digital Millennium Copyright Act (“DMCA”) — which were adopted during the era of dial-up, and before any of the principal members of the Internet Association even existed. At the same time, however, it proposes to reform one piece of legislation — the Electronic Communications Privacy Act (“ECPA”) — precisely because, at 30 years old, it has long since become hopelessly out of date. But surely if outdatedness is a justification for asserting the inappropriateness of existing privacy/surveillance legislation — as seems proper, given the massive technological and social changes surrounding privacy — the same concern should apply to copyright legislation with equal force, given the arguably even-more-substantial upheavals in the economic and social role of creative content in society today.

Of course there “is more certainty in reselling the past, than inventing the future,” but a truly valuable roadmap for the future from some of the most powerful and visionary companies in America should begin to tackle some of the most complicated and nuanced questions facing our country. It would be nice to see a Roadmap premised upon a well-articulated theory of accountability across all of the Internet ecosystem in ways that protect property, integrity, choice and other essential aspects of modern civil society.

Each of IA’s companies was principally founded on a vision of improving some aspect of the human condition; in many respects they have succeeded. But as society changes, even past successes may later become inconsistent with evolving social mores and economic conditions, necessitating thoughtful introspection and, often, policy revision. The IA can do better than pick and choose from among existing policies based on unilateral advantage and a convenient repudiation of responsibility.

Truth on the Market is delighted to welcome our newest blogger, Neil Turkewitz. Neil is the newly minted Senior Policy Counsel at the International Center for Law & Economics (so we welcome him to ICLE, as well!).

Prior to joining ICLE, Neil spent 30 years at the Recording Industry Association of America (RIAA), most recently as Executive Vice President, International.

Neil has spent most of his career working to expand economic opportunities for the music industry through modernization of copyright legislation and effective enforcement in global markets. He has worked closely with creative communities around the globe, with the US and foreign governments, and with international organizations (including WIPO and the WTO), to promote legal and enforcement reforms to respond to evolving technology, and to promote a balanced approach to digital trade and Internet governance premised upon the importance of regulatory coherence, elimination of inefficient barriers to global communications, and respect for Internet freedom and the rule of law.

Among other things, Neil was instrumental in the negotiation of the WTO TRIPS Agreement, worked closely with the US and foreign governments in the negotiation of free trade agreements, helped to develop the OECD’s Communique on Principles for Internet Policy Making, coordinated a global effort culminating in the production of the WIPO Internet Treaties, served as a formal advisor to the Secretary of Commerce and the USTR as Vice-Chairman of the Industry Trade Advisory Committee on Intellectual Property Rights, and served as a member of the Board of the Chamber of Commerce’s Global Intellectual Property Center.

You can read some of his thoughts on Internet governance, IP, and international trade here and here.

Welcome Neil!

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Mylan Pharmaceuticals recently reinvigorated the public outcry over pharmaceutical price increases when news surfaced that the company had raised the price of EpiPens by more than 500% over the past decade and, purportedly, had plans to increase the price even more. The Mylan controversy comes on the heels of several notorious pricing scandals last year. Recall Valeant Pharmaceuticals, which acquired cardiac drugs Isuprel and Nitropress and then quickly raised their prices by 525% and 212%, respectively. And of course, who can forget Martin Shkreli of Turing Pharmaceuticals, who increased the price of toxoplasmosis treatment Daraprim by 5,000% and then claimed he should have raised the price even higher.

However, one company, pharmaceutical giant Allergan, seems to be taking a different approach to pricing. Last week, Allergan CEO Brent Saunders condemned the scandalous price increases that have cast suspicion on drug companies and placed the entire industry in the political hot seat. In an entry on the company’s blog, Saunders issued Allergan’s “social contract with patients,” which made several drug pricing commitments to its customers.

Some of the most important commitments Allergan made to its customers include:

  • A promise to not increase prices more than once a year, and to limit price increases to single-digit percentage increases.
  • A pledge to improve patient access to Allergan medications by enhancing patient assistance programs in 2017.
  • A vow to cooperate with policy makers and payers (including government drug plans, private insurers, and pharmacy benefit managers) to facilitate better access to Allergan products by offering pricing discounts and paying rebates to lower drug costs.
  • An assurance that Allergan will no longer engage in the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry, without cost increases that justify the increase.
  • A commitment to provide annual updates on how pricing affects Allergan’s business.
  • A pledge to price Allergan products in a way that is commensurate with, or lower than, the value they create.

Saunders also makes several non-pricing pledges to maintain a continuous supply of its drugs, diligently monitor the safety of its products, and appropriately educate physicians about its medicines. He also makes the point that the recent pricing scandals have shifted attention away from the vibrant medical innovation ecosystem that develops new life-saving and life-enhancing drugs. Saunders contends that the focus on pricing by regulators and the public has incited suspicions about this innovation ecosystem: “This ecosystem can quickly fall apart if it is not continually nourished with the confidence that there will be a longer term opportunity for appropriate return on investment in the long R&D journey.”

Policy-makers and the public would be wise to focus on the importance of brand drug innovation. Brand drug companies are largely responsible for pharmaceutical innovation. Since 2000, brand companies have spent over half a trillion dollars on R&D, and they currently account for over 90 percent of the spending on the clinical trials necessary to bring new drugs to market. As a result of this spending, over 550 new drugs have been approved by the FDA since 2000, and another 7,000 are currently in development globally. And this innovation is directly tied to health advances. Empirical estimates of the benefits of pharmaceutical innovation indicate that each new drug brought to market saves 11,200 life-years each year. Moreover, new drugs save money by reducing doctor visits, hospitalizations, and other medical procedures; ultimately, for every $1 spent on new drugs, total medical spending decreases by more than $7.

But, as Saunders suggests, this innovation depends on drugmakers earning a sufficient return on their investment in R&D. The costs to bring a new drug to market with FDA approval are now estimated at over $2 billion, and only 1 in 10 drugs that begin clinical trials are ever approved by the FDA. Brand drug companies must price a drug not only to recoup the drug’s own costs; they must also factor the costs of all the product failures into their pricing decisions. However, they have a very limited window to recoup these costs before generic competition destroys brand profits: within three months of the first generic entry, generics have already captured over 70 percent of the brand drug’s market. Drug companies must be able to price drugs at a level where they can earn profits sufficient to offset their R&D costs and the risk of failures. Failure to cover these costs will slow investment in R&D; drug companies will not spend millions and billions of dollars developing drugs if they cannot recoup the costs of that development.
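The arithmetic behind pricing-in-failure can be sketched with a simple back-of-envelope calculation. The figures below are purely illustrative (the per-candidate cost is a hypothetical, not a number from this post), but they show why a 1-in-10 approval rate forces each approved drug to carry roughly ten times its own development cost:

```python
def required_revenue_per_approval(cost_per_candidate: float,
                                  approval_rate: float) -> float:
    """Expected development cost that each *approved* drug must recoup,
    spreading the cost of failed candidates across the successes."""
    return cost_per_candidate / approval_rate

# Hypothetical: $200M per candidate entering trials, 1-in-10 approval rate.
# Each success must then recoup roughly 10 x $200M = $2 billion, which is
# in the same ballpark as the >$2 billion estimate cited above.
breakeven = required_revenue_per_approval(cost_per_candidate=200e6,
                                          approval_rate=0.10)
print(f"Each approved drug must recoup about ${breakeven / 1e9:.1f} billion")
```

The point of the sketch is simply that a price covering only a drug’s own costs would systematically under-recover: the denominator (the approval rate) does the work, and it is why cutting the recoupment window with price controls bites harder than it first appears.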

Yet several recent proposals threaten to control prices in a way that could prevent drug companies from earning a sufficient return on their investment in R&D. Ultimately, we must remember that a social contract involves commitment from all members of a group; it should involve commitments from drug companies to price responsibly, and commitments from the public and policy makers to protect innovation. Hopefully, more drug companies will follow Allergan’s lead and renounce the exorbitant price increases we’ve seen in recent times. But in return, we should all remember that innovation and, in turn, health improvements, depend on drug companies’ profitability.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened and rejected similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but also don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs, drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google earns the bulk of its revenue from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals by simply typing the rival’s web address into their browser’s address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And instead of trying to hamstring Google, Google’s competitors (and complainants) must innovate as well if they are to survive.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with “instant camera,” let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.