
While we all wait on pins and needles for the DC Circuit to issue its long-expected ruling on the FCC’s Open Internet Order, another federal appeals court has pushed back on Tom Wheeler’s FCC for its unremitting “just trust us” approach to federal rulemaking.

The case, round three of Prometheus, et al. v. FCC, involves the FCC’s long-standing rules restricting common ownership of local broadcast stations and their extension by Tom Wheeler’s FCC to the use of joint sales agreements (JSAs). (For more background see our previous post here). Once again the FCC lost (it’s now only 1 for 3 in this case…), as the Third Circuit Court of Appeals took the Commission to task for failing to establish that its broadcast ownership rules were still in the public interest, as required by law, before it decided to extend those rules.

While much of the opinion deals with the FCC’s unreasonable delay (of more than 7 years) in completing two Quadrennial Reviews in relation to its diversity rules, the court also vacated the FCC’s rule expanding its duopoly rule (or local television ownership rule) to ban joint sales agreements without first undertaking the reviews.

We (the International Center for Law and Economics, along with affiliated scholars of law, economics, and communications) filed an amicus brief arguing for precisely this result, noting that

the 2014 Order [] dramatically expands its scope by amending the FCC’s local ownership attribution rules to make the rule applicable to JSAs, which had never before been subject to it. The Commission thereby suddenly declares unlawful JSAs in scores of local markets, many of which have been operating for a decade or longer without any harm to competition. Even more remarkably, it does so despite the fact that both the DOJ and the FCC itself had previously reviewed many of these JSAs and concluded that they were not likely to lessen competition. In doing so, the FCC also fails to examine the empirical evidence accumulated over the nearly two decades some of these JSAs have been operating. That evidence shows that many of these JSAs have substantially reduced the costs of operating TV stations and improved the quality of their programming without causing any harm to competition, thereby serving the public interest.

The Third Circuit agreed that the FCC utterly failed to justify its continued foray into banning potentially pro-competitive arrangements, finding that

the Commission violated § 202(h) by expanding the reach of the ownership rules without first justifying their preexisting scope through a Quadrennial Review. In Prometheus I we made clear that § 202(h) requires that “no matter what the Commission decides to do to any particular rule—retain, repeal, or modify (whether to make more or less stringent)—it must do so in the public interest and support its decision with a reasoned analysis.” Prometheus I, 373 F.3d at 395. Attribution of television JSAs modifies the Commission’s ownership rules by making them more stringent. And, unless the Commission determines that the preexisting ownership rules are sound, it cannot logically demonstrate that an expansion is in the public interest. Put differently, we cannot decide whether the Commission’s rationale—the need to avoid circumvention of ownership rules—makes sense without knowing whether those rules are in the public interest. If they are not, then the public interest might not be served by closing loopholes to rules that should no longer exist.

Perhaps this decision will be a harbinger of good things to come. The FCC — and especially Tom Wheeler’s FCC — has a history of failing to justify its rules with anything approaching rigorous analysis. The Open Internet Order is a case in point. We will all be better off if courts begin to hold the Commission’s feet to the fire and throw out its rules when it fails to do the work needed to justify them.

In a 2015 Heritage Foundation Backgrounder, I argued for a reform of the United States antidumping (AD) law, which allows for the imposition of additional tariffs on “unfairly” low-priced imports.  Although the original justification for American AD law was to prevent anticompetitive predation by foreign producers, I explained that the law as currently designed and applied instead diminishes competition in American industries affected by AD tariffs and reduces economic welfare.  I argued that modification of U.S. AD law to incorporate an antitrust predatory pricing standard would strengthen the American economy and benefit U.S. consumers while precluding any truly predatory dumping designed to destroy domestic industries and monopolize American industrial sectors.

A recent economic study supported by the World Bank and released by the European University Institute confirms that the global proliferation of AD laws in recent decades raises serious competitive concerns.  The study concludes:

Over a century, antidumping has gradually evolved from an obscure and rarely used policy tool to one that now constitutes an important form of protection not subject to the same WTO [World Trade Organization] controls as members’ bound tariff rates. Rather, antidumping is one of several instruments that allow members to exceed their bound tariffs, albeit subject to very detailed WTO procedural disciplines. Moreover, while the application of antidumping was until the WTO era mainly the province of a few traditional users, emerging markets have become some of the most active users of antidumping and related policies as well as important targets of their application. And though these policies are known collectively as temporary trade barriers, WTO rules governing the duration of antidumping measures are much weaker than for safeguards.

As antidumping use has evolved and proliferated (about 50 countries now have antidumping statutes although some are not active users), both its economic justification and the concerns raised by its possible abuse have also evolved. While the original justification of antidumping was to protect importing countries from predation by foreign suppliers, by the 1980s antidumping had come to be regarded as just another tool in the protectionist arsenal. Even more worrying, evidence began to mount that antidumping was being used in ways that actually enforced collusion and cartel arrangements rather than attacking anticompetitive behavior.

Today’s world economy and international trading system are much different even from those of the early 1990s, when this concern reached its peak. Some changes, in particular the significant growth in the number of countries and firms actively engaged in international trade, tend to limit the possibility of predation by exporters. Moreover, antidumping has developed a political-economic justification as a tool that can help countries manage the internal stresses associated with openness. But other changes, especially the important role of multinational firms and intra-firm trade and the increased use by many countries of policies to limit exports, suggest that concerns about anticompetitive behavior by exporters cannot be entirely dismissed. Vigilance to ensure that antidumping is not abused by complainants to achieve and exploit market power thus remains appropriate today.

In sum, the study reveals that anticompetitive misuse of AD law has become a serious international problem. But because the potential for occasional predatory dumping remains (China is discussed in that regard), what is called for is appropriate monitoring of how AD laws are actually applied.

Building on the study’s conclusion, the best way of monitoring AD laws to ensure that they are employed in a procompetitive fashion would be to redesign those statutes to adopt a procompetitive antitrust predatory-pricing standard, as recommended in my 2015 Backgrounder.  Such an approach would tend to minimize error costs by providing a straightforward methodology for readily identifying actual cases of foreign predation and quickly rejecting unjustified AD complaints.

This in turn suggests that a new Administration interested in truly welfare-enhancing international trade reform could press for redesign of the WTO Antidumping Agreement to require that WTO-conforming AD laws satisfy antitrust-based predation principles.  Initially, a more modest effort might be to work with like-minded nations toward plurilateral agreements whereby the signatories would agree to conform their AD laws to antitrust predation standards.  Simultaneously, of course, the new Administration would have to make the case to Congress that such an antitrust-based reform of American AD law makes good economic sense.

American AD reform along these lines would represent a rejection of crony capitalism and endorsement of a consumer welfare-based approach to international trade law – an approach that would strengthen the economy and ultimately benefit American consumers and producers alike.  It would also reinforce the role of the United States as the leader of the effort to liberalize international trade and thereby promote global economic growth.  (Moreover, to the extent foreign nations adopted the proposed AD reform, American exporters would directly benefit by being afforded new opportunities to compete in foreign markets.)

I have previously written at this site (see here, here, and here) and elsewhere (see here, here, and here) about the problem of anticompetitive market distortions (ACMDs), government-supported (typically crony capitalist) rules that weaken the competitive process, undermine free trade, slow economic growth, and harm consumers.  On May 17, the Heritage Foundation hosted a presentation by Shanker Singham of the Legatum Institute (a London think tank) and me on recent research and projects aimed at combatting ACMDs.

Singham began his remarks by noting that from the late 1940s to the early 1990s, trade negotiations under the auspices of the General Agreement on Tariffs and Trade (GATT) (succeeded by the World Trade Organization (WTO)) were highly successful in reducing tariffs and certain non-tariff barriers, and in promoting agreements to deal with trade-related aspects of such areas as government procurement, services, investment, and intellectual property, among others.  Regrettably, however, liberalization of trade restraints at the border was not matched by procompetitive regulatory reform inside borders.  Indeed, to the contrary, ACMDs have continued to proliferate, harming competition, consumers, and economic welfare.  As Singham further explained, the problem is particularly acute in developing countries:  “Because of the failure of early [regulatory] reform in the 1990s which empowered oligarchs and created vested interests in the whole of the developing world, national level reform is extremely difficult.”

To highlight the seriousness of the ACMD problem, Singham and several colleagues have developed a proprietary “Productivity Simulator” that focuses on potential national economic output based on measures of the effectiveness of domestic competition, international competition, and property rights protections within individual nations.  (The stronger the protections, the greater the potential of the free market to create wealth.)  The Productivity Simulator is able to show, with a regressed accuracy of 90%, the potential gains from reducing distortions in a given country.  Every country has its own curve in the Productivity Simulator – it is a curve because the gains are exponential as one moves to the most difficult reforms.  If all distortions in the world were eliminated (that is, if the ceiling of human potential were reached), the Simulator predicts that global GDP would rise by 1,100% (a conservative estimate, because the Simulator could not be applied to certain heavily regulation-distorted economies for which data were unavailable).  By illustrating the huge “dollars and cents” magnitude of economic losses due to anticompetitive distortions, the Simulator could make the ACMD problem more concrete and thereby help invigorate reform efforts.

Singham also has adapted his Simulator technique to demonstrate the potential for economic growth in proposed “Enterprise Cities” (“e-Cities”), free-market-oriented zones within a country that avoid ACMDs and provide strong property rights and rule-of-law protections.  (Existing city-states such as Hong Kong, Singapore, and Dubai already possess e-City characteristics.)  Individual e-City laws, regulations, and dispute-resolution mechanisms are negotiated between individual governments and entrepreneurial project teams headed by Singham.  (Already, potential e-Cities are under consideration in Morocco, Saudi Arabia, Bosnia & Herzegovina, and Somalia.)  Private investors would be attracted to e-Cities due to their free-market regulatory climate and legal protections.  To the extent that e-Cities are launched and thrive, they may serve as “demonstration projects” for the welfare benefits of dismantling ACMDs.

Following Singham’s presentation, I discussed analyses of the ACMD problem carried out in recent years by major international organizations, including the World Bank, the Organization for Economic Cooperation and Development (OECD, an economic think tank funded by developed countries), and the International Competition Network (ICN, a network of national competition agencies and expert legal and economic advisers that produces non-binding “best practices” recommendations dealing with competition law and policy).  The OECD’s “Competition Assessment Toolkit” is a how-to manual for ferreting out ACMDs – it “helps governments to eliminate barriers to competition by providing a method for identifying unnecessary restraints on market activities and developing alternative, less restrictive measures that still achieve government policy objectives.”  The OECD has used the Toolkit to demonstrate the huge economic cost to the Greek economy (5.2 billion euros) of just a very small subset of anticompetitive regulations.  The ICN has drawn on Toolkit principles in developing “Recommended Practices on Competition Assessment” that national competition agencies can apply in opposing ACMDs.  In a related vein, the ICN has also produced a “Competition Culture Project Report” that provides useful survey-based analysis that competition agencies could draw upon to generate public support for dismantling ACMDs.

The World Bank has cooperated with ICN advocacy efforts.  It has sponsored annual World Bank forums featuring industry-specific studies of the costs of regulatory restrictions, held in conjunction with ICN annual conferences, and (beginning in 2015) it has also joined with the ICN in supporting annual “competition advocacy contests” in which national competition agencies are able to highlight economic improvements due to specific regulatory reform successes.

Developed countries also suffer from ACMDs.  For example, occupational licensing restrictions in the United States affect over a quarter of the work force, and, according to a 2015 White House Report, “licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines.”  Moreover, the multibillion-dollar cost burden of federal regulations continues to grow rapidly, as documented by the Heritage Foundation’s annual “Red Tape Rising” reports.

I closed my presentation by noting that statutory international trade law reforms operating at the border could complement efforts to reduce regulatory burdens operating inside the border.  In particular, I cited my 2015 Heritage study recommending that United States antidumping law be revised to adopt a procompetitive antitrust-based standard (in contrast to the current approach that serves as an unjustified tax on certain imports).  I also noted the importance of ensuring that trade laws protect against imports that violate intellectual property rights, because such imports undermine competition on the merits.

In sum, the effort to reduce the burdens of ACMDs continues to be pursued and highlighted in research, proposed demonstration projects, and initiatives to spur regulatory reform.  This is a long-term undertaking very much worth pursuing, even though its near-term successes may prove minor at best.

As we noted in our issue brief on the impending ICANN transition, given the vast scope of the problem, voluntary relationships between registries, registrars and private industry will be a critical aspect of controlling online piracy. Last week the MPAA and registry operator Radix announced a new “trusted notifier” program under which the MPAA will be permitted to submit evidence of large-scale piracy occurring in Radix-managed top-level domains.

In many respects, this resembles the program that the MPAA and Donuts established in February; as the first non-U.S.-based program, however, it is a major step forward. As in the Donuts agreement, the new program will contain a number of procedural safeguards, including a requirement of clear evidence that pervasive infringement is occurring, along with a document attesting to the fact that the MPAA first attempted to resolve the situation with the registrar directly. If, after attempting to work with its associated registrars to contact the website owner, Radix determines that the website is engaged in illegal conduct, it will either place the domain name on hold or else suspend it entirely.

These sorts of self-help agreements are crucial to the future of Internet governance, and not merely because they facilitate the removal of infringing content. Once ICANN becomes an independent organization that is completely untethered from the U.S. Government, it will be up to the community at large to maintain the credibility of DNS management.

And the importance of these self-help agreements is particularly acute in light of ICANN’s long-standing refusal to enforce the contractual restrictions in its agreements with registries and registrars. As we noted in our brief:

Very likely, [ICANN’s governance structure] will be found through voluntary, private arrangements between registries, registrars, and third parties. An overarching commitment to enforcing legitimate contracts, therefore, even ones that espouse particular policy objectives, will be a core attribute of a well-organized ICANN.

In fact, far and away ICANN’s most significant failing has been the abdication of its responsibility to enforce the terms of its own contracts, particularly the Registrar Accreditation Agreement. The effect of this obstinacy is that ICANN has failed to exercise its obligation to maintain a “secure, stable, [and] resilient… Internet” free of costly “pollutants” like piracy, illegal prescription drugs, and phishing sites that impose significant costs on others with relative impunity.

In March, ICANN submitted its stewardship proposal to Congress — a document that outlines how ICANN proposes to operate as an independent organization. Much criticism of the transition has focused on the possibility of authoritarian regimes co-opting the root zone file and related DNS activities. It’s in everyone’s interest to prevent clearly illegal conduct from occurring online. Otherwise, without a minimal standard of governance, the arguments for a multilateral, government-run Internet become much easier to advance.

And certainly a big part of Congress’s consideration of the transition will be whether ICANN can plausibly continue to operate as a legitimate steward of the DNS. When registries step forward and agree to maintain at least a minimal baseline of pro-social conduct, it goes a long way toward moving the transition forward and guaranteeing a free and open Internet for the future.

Trade secrets are frequently one of the most powerful forms of intellectual property that a company has in its competitive arsenal. Particularly given the ongoing interest in whittling away at the property rights of patent holders (e.g. the enhanced IPR process, and even the more tame VENUE Act), trade secrets are a critical means for firms to obtain and retain advantages in highly competitive markets.

Yet, historically, the scope of federal recognition of these quasi-property rights was exceedingly circumscribed. That is, until yesterday, when President Obama signed the Defend Trade Secrets Act (“DTSA”) into law. The Act is designed to create a uniform body of federal law that will allow jurisdiction-straddling entities to more effectively enforce their often very valuable interests in proprietary information. Despite a handful of critics of this effort over the last few years, the law passed Congress with minimal friction, and, at least at this early stage, seems like a fairly laudable step in the right direction.

The Act contains a number of important provisions, including providing uniform federal jurisdiction over trade secret actions across the United States, the potential for civil seizure of instrumentalities of misappropriation when injunctions would be insufficient, a clear damages calculation and recovery of fees, and certain safeguards that protect employees from suit when switching employers or engaging in whistleblowing.

A few of the provisions of the law are particularly interesting and bear some examination, as they will undoubtedly be hot spots for litigation in the years to come.

First, the DTSA does not preempt existing state trade secret laws. Instead it creates a federal overlay as a separate cause of action. The critics believe that this gives plaintiffs too much power insofar as they can now pick and choose whether to pursue a claim in state or federal court. Further — and this criticism I take more seriously — adding a federal law doesn’t do much to clarify the ways that an individual might run afoul of trade secret law. If anything it marginally increases uncertainty as there is now one more law to consider on top of all of the state trade secret laws.

Nonetheless, even though a company is free to bring both state and federal trade secret actions against an individual — and likely will do so when there is a misappropriation — I’m not sure why this is a bad thing. If a company sues a would-be spy, the point is not to bury them in protracted litigation, so much as it is to keep them from immediately fleeing to a foreign jurisdiction with valuable information. Thus, the federal jurisdiction provides a more expedient tool that steps around the inherent latency in obtaining an order from one state court that subsequently one or more other state courts need to recognize and enforce in order to prevent the release of the information.

And when a suit is brought between two companies, it seems hard to believe that an additional federal claim on top of a state claim will really be the difference between life and death for the companies. The litigation would be expensive and time-consuming whether or not the federal claim exists, and in all likelihood the discovery and legal arguments will end up being fairly identical (the DTSA is modeled, more or less, after the Uniform Trade Secrets Act, which has been adopted to varying extents by 48 states).

Second, under “extraordinary circumstances,” the DTSA allows for an ex parte court-ordered civil seizure of any misappropriated trade secrets, or of property associated with the theft (e.g., computers, flash drives, etc.). And the relevant question here is, of course, just how “extraordinary” an “extraordinary circumstance” must be? Likely, very extraordinary.

In this era of networked devices, why would a defendant who seeks to steal trade secrets not immediately transfer the valuable information to an offshore server? I’m sure there have to be instances where such a transfer fails to take place — perhaps in an effort to evade detection an individual might strictly keep information on a thumb drive, thus making civil seizure a good option. Still, I don’t quite grasp the utility of this provision beyond a really narrow set of circumstances, particularly given the equitable powers that district courts already have.

Also, the aforementioned critics essentially agree with this point, even though they raised it as a problem. They described the provision as possibly “superfluous,” since a plaintiff needs to make a showing that Rule 65(b) preliminary relief would be inadequate. I am as big a fan of property rights as the next classical liberal, but I have trouble seeing how this provision will end up being a net negative.

Courts are generally reluctant to seize property when there are other forms of relief available, and given the fact that any proprietary information will most likely get out instantly anyway, it seems basically impossible, under most claims that would be brought, to get a seizure order that would have any effect.

What’s left, then, are very narrow, rare circumstances in which a judge really sees an urgent need to seize property. And, likely, in the very few cases where seizure will be appropriate, the plaintiffs most emphatically won’t regard the provision as superfluous, while in the overwhelming majority of cases, defendants needn’t fear the provision at all.

One of the more prominent concerns of critics is that the federal law will be a tool with which to control or punish former employees as they move on to work for competitors. However, even this concern appears overblown. Professor Sharon Sandeen, for example, believes that the Act will create “trade secret trolls” who will be able to ruin the careers of former employees (although in her testimony she doesn’t exactly spell out how the DTSA in particular facilitates this in a way that existing state laws do not). Nonetheless, the DTSA contains a provision that disallows enforcement against individuals under the “inevitable disclosure” doctrine. That doctrine, sometimes allowed in state courts, provides former employers with the ability to seek damages and injunctions when a former employee goes to work for a competitor and, during the course of that new employment, it is “inevitable” that trade secrets would be disclosed. I haven’t done extended research on that doctrine, but at least its inapplicability to DTSA claims seems to answer critics’ concerns reasonably well.

On the whole, the law seems aimed at helping companies that depend upon trade secrets to vindicate their interests in a timely and effective manner, and with minimal downside to employees. It is somewhat perplexing, though, that the law does not displace state laws; that certainly would have added a degree of clarity. If anything, the DTSA provides for an extension of trade secret protection that Congress already began in 1996 with the Economic Espionage Act. That Act, a criminal law, makes it a crime, punishable by a fine and up to ten years in prison, for an individual to misappropriate trade secrets in connection with a foreign power. The shortcomings of that law, however, are obvious: (1) it requires the involvement of a foreign government, which is just not the common case for industrial espionage, and (2) it relies on a federal prosecutor to take up the case. The DTSA, on the other hand, gives companies what seems like a long-overdue federal right to curb similar behavior in more ordinary circumstances.

The lifecycle of a law is a curious one; born to fanfare, a great solution to a great problem, but ultimately doomed to age badly as lawyers seek to shoehorn wholly inappropriate technologies and circumstances into its ambit. The latest chapter in the book of badly aging laws comes to us courtesy of yet another dysfunctional feature of our political system: the Supreme Court nomination and confirmation process.

In 1987, President Reagan nominated Judge Bork for a spot on the US Supreme Court. During the confirmation process following his nomination, a reporter was able to obtain a list of videos he and his family had rented from local video rental stores (you remember those, right?). In response to this invasion of privacy — by a reporter whose intention was to publicize and thereby (in some fashion) embarrass or “expose” Judge Bork — Congress enacted the Video Privacy Protection Act (“VPPA”) in 1988.

In short, the VPPA makes it illegal for a “video tape service provider” to knowingly disclose to third parties any “personally identifiable information” in connection with the viewing habits of a “consumer” who uses its services. Left as written and confined to the scope originally intended for it, the Act seems more or less fine. However, over the last few years, plaintiffs have begun to use the Act as a weapon with which to attack common Internet business models in a manner wholly out of keeping with the drafters’ intent.

And with a decision that promises to be a windfall for hungry plaintiffs’ attorneys everywhere, the First Circuit recently allowed a plaintiff, Alexander Yershov, to make it past a 12(b)(6) motion on a claim that Gannett violated the VPPA with its USA Today Android mobile app.

What’s in a name (or Android ID)?

The app in question allowed Mr. Yershov to view videos without creating an account, providing his personal details, or otherwise subscribing (in the generally accepted sense of the term) to USA Today’s content. What Gannett did do, however, was to provide to Adobe Systems the Android ID and GPS location data associated with Mr. Yershov’s use of the app’s video content.

In interpreting the VPPA in a post-Blockbuster world, the First Circuit panel (which, apropos of nothing, included retired Justice Souter) had to wrestle with whether Mr. Yershov counts as a “subscriber,” and to what extent an Android ID and location information count as “personally identifiable information” under the Act. Relying on the possibility that Adobe might be able to infer the identity of the plaintiff given its access to data from other web properties, and given the court’s rather gut-level instinct that an app user is a “subscriber,” the court allowed the plaintiff to survive the 12(b)(6) motion.

The PII point is the more arguable of the two, as the statutory language is somewhat vague. Under the Act, PII “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” On this score the court decided that GPS data plus an Android ID (or each alone — it wasn’t completely clear) could constitute information protected under the Act (at least for purposes of a 12(b)(6) motion):

The statutory term “personally identifiable information” is awkward and unclear. The definition of that term… adds little clarity beyond training our focus on the question whether the information identifies the person who obtained the video…. Nevertheless, the language reasonably conveys the point that PII is not limited to information that explicitly names a person.

OK (maybe). But where the court goes off the rails is in its determination that an Android ID, GPS data, or a list of videos is, in itself, enough to identify anyone.

It might be reasonable to conclude that Adobe could use that information in combination with other information it collects from yet other third parties (fourth parties?) in order to build up a reliable, personally identifiable profile. But the statute’s language doesn’t hang on such a combination. Instead, the court’s reasoning finds potential liability by reading this exact sort of prohibition into the statute:

Adobe takes this and other information culled from a variety of sources to create user profiles comprised of a given user’s personal information, online behavioral data, and device identifiers… These digital dossiers provide Adobe and its clients with “an intimate look at the different types of materials consumed by the individual” … While there is certainly a point at which the linkage of information to identity becomes too uncertain, or too dependent on too much yet-to-be-done, or unforeseeable detective work, here the linkage, as plausibly alleged, is both firm and readily foreseeable to Gannett.

Despite its hedging about uncertain linkages, the court’s reasoning remains contingent on an awful lot of other moving parts — something found in neither the text of the law nor the legislative history of the Act.

The information sharing identified by the court is in no way the sort of simple disclosure of PII that easily identifies a particular person in the way that, say, Blockbuster Video would have been able to do in 1988 with disclosure of its viewing lists.  Yet the court purports to find a basis for its holding in the abstract nature of the language in the VPPA:

Had Congress intended such a narrow and simple construction [as specifying a precise definition for PII], it would have had no reason to fashion the more abstract formulation contained in the statute.

Again… maybe. Maybe Congress meant to future-proof the provision, and didn’t want the statute construed as being confined to the simple disclosure of name, address, phone number, and so forth. I doubt, though, that it really meant to encompass the sharing of any information that might, at some point, by some unknown third parties be assembled into a profile that, just maybe if you squint at it hard enough, will identify a particular person and their viewing habits.

Passive Subscriptions?

What seems pretty clear, however, is that the court got it wrong when it declared that Mr. Yershov was a “subscriber” to USA Today by virtue of simply downloading an app from the Play Store.

The VPPA prohibits disclosure of a “consumer’s” PII — with “consumer” meaning “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” In this case (as presumably will happen in most future VPPA cases involving free apps and websites), the plaintiff claims that he is a “subscriber” to a “video tape” service.

The court built its view of “subscriber” predominantly on two bases: (1) that you don’t need to actually pay anything to count as a subscriber (with which I agree), and (2) that something about installing an app that can send you push notifications is different enough from frequenting a website that a user, no matter how casual, becomes a “subscriber”:

When opened for the first time, the App presents a screen that seeks the user’s permission for it to “push” or display notifications on the device. After choosing “Yes” or “No,” the user is directed to the App’s main user interface.

The court characterized this connection between USA Today and Yershov as “seamless” — ostensibly because the app facilitates push notifications to the end user.

Thus, simply because it offers an app that can send push notifications to users, and because this app sometimes shows videos, a website or Internet service — in this case, an app portal for a newspaper company — becomes a “video tape service,” offering content to “subscribers.” And by sharing information in a manner that is nowhere mentioned in the statute and that on its own is not capable of actually identifying anyone, the company suddenly becomes subject to what will undoubtedly be an avalanche of lawsuits (at least in the First Circuit).

Preposterous as this may seem on its face, it gets worse. Nothing in the court’s opinion is limited to “apps,” and the “logic” would seem to apply to the general web as well (whether the “seamless” experience is provided by push notifications or some other technology that facilitates tighter interaction with users). But, rest assured, the court believes that

[B]y installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.

Thank goodness it’s “materially” different… although just going by the reasoning in this opinion, I don’t see how that can possibly be true.

What happens when web browsers can enable push notifications between users and servers? Well, I guess we’ll find out soon because major browsers now support this feature. Further, other technologies — like websockets — allow for continuous two-way communication between users and corporate sites. Does this change the calculus? Does it meet the court’s “test”? If so, the court’s exceedingly vague reasoning provides little guidance (and a whole lot of red meat for lawsuits).
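To make the technological point concrete, here is a minimal, hypothetical sketch (in TypeScript, using standard browser APIs) of how an ordinary web page can already establish the kind of push-capable, continuously connected channel the court treated as significant for the app. The service-worker path, WebSocket URL, and server key are placeholders; this illustrates the general technique, not anything Gannett actually does.

```typescript
// Hypothetical sketch only: standard browser APIs (service workers, the Push API,
// and WebSocket) that let a plain website keep a "seamless" connection with a user.
// The file path, URL, and key below are placeholders, not real endpoints.

async function enablePushNotifications(): Promise<void> {
  // Register a service worker so the site can receive pushes even when no tab is
  // open, much like a mobile app's push notifications.
  const registration = await navigator.serviceWorker.register("/service-worker.js");

  // Ask the user for permission, analogous to the app's first-run "push" prompt.
  const permission = await Notification.requestPermission();
  if (permission !== "granted") {
    return;
  }

  // Subscribe this browser to the site's push service.
  await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: "PLACEHOLDER_VAPID_PUBLIC_KEY",
  });
}

// A WebSocket provides the same sort of continuous two-way channel from an ordinary page.
function openLiveChannel(): WebSocket {
  const socket = new WebSocket("wss://example.com/updates"); // placeholder URL
  socket.onmessage = (event) => console.log("server pushed:", event.data);
  return socket;
}
```

If a push-capable, “seamless” connection is what turns a casual user into a “subscriber,” nothing in that sketch is unique to a mobile app.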

To bolster its view that apps are qualitatively different than web sites with regard to their delivery to consumers, the court asks “[w]hy, after all, did Gannett develop and seek to induce downloading of the App?” I don’t know, because… cell phones?

And, in fact, this bit of “reasoning” does nothing for the court’s opinion. Gannett undertook development of a web site in the first place because some cross-section of the public was interested in reading news online (and that was certainly the case for any electronic distribution pre-2007). Moreover, consumers have increasingly been moving toward using mobile devices for their online activities. Though it’s a debatable point, apps can often provide a better user experience than a mobile browser. Regardless, the line between “app” and “web site” is increasingly a blurry one, especially on mobile devices, and with the proliferation of HTML5 and frameworks like Google’s Progressive Web Apps, the line will only grow more indistinct. That Gannett was seeking to provide the public with an app has nothing to do with whether it intended to develop a more “intimate” relationship with mobile app users than it has with web users.

The Eleventh Circuit, at least, understands this. In Ellis v. Cartoon Network, it held that a mere user of an app — without more — could not count as a “subscriber” under the VPPA:

The dictionary definitions of the term “subscriber” we have quoted above have a common thread. And that common thread is that “subscription” involves some type of commitment, relationship, or association (financial or otherwise) between a person and an entity. As one district court succinctly put it: “Subscriptions involve some or [most] of the following [factors]: payment, registration, commitment, delivery, [expressed association,] and/or access to restricted content.”

The Eleventh Circuit’s point is crystal clear, and I’m not sure how the First Circuit failed to appreciate it (particularly since it was the district court below in the Yershov case that the Eleventh Circuit was citing). Instead, the court got tied up in asking whether or not a payment was required to constitute a “subscription.” But that’s wrong. What’s needed is some affirmative step – something more than just downloading an app, and certainly something more than merely accessing a web site.

Without that step — a “commitment, relationship, or association (financial or otherwise) between a person and an entity” — the development of technology that simply offers a different mode of interaction between users and content promises to transform the VPPA into a tremendously powerful weapon in the hands of eager attorneys, and a massive threat to the advertising-based business models that have enabled the growth of the web.

How could this possibly not apply to websites?

In fact, there is no way this opinion won’t be picked up by plaintiffs’ attorneys in suits against web sites that allow ad networks to collect any information on their users. Web sites may not have access to exact GPS data (for now), but they do have access to fairly accurate location data, cookies, and a host of other data about their users. And with browser-based push notifications and other technologies being developed to create what the court calls a “seamless” experience for users, any user of a web site will count as a “subscriber” under the VPPA. The potential damage to the business models that have funded the growth of the Internet is hard to overstate.

There is hope, however.

Hulu faced a similar challenge over the last few years arising out of its collection of viewer data on its platform and the sharing of that data with third-party ad services in order to provide better-targeted and, importantly, more user-relevant marketing. Last year it actually won a summary judgment motion on the basis that it had no way of knowing that Facebook (the third party with which it was sharing data) would reassemble the data in order to identify particular users and their viewing habits. Nevertheless, Hulu has previously lost motions on the subscriber and PII issues.

Hulu has, however, previously raised one issue in its filings on which the district court punted, but that could hold the key to putting these abusive litigations to bed.

The VPPA provides a very narrowly written exception to the prohibition on information sharing when such sharing is “incident to the ordinary course of business” of the “video tape service provider.” “Ordinary course of business” in this context means  “debt collection activities, order fulfillment, request processing, and the transfer of ownership.” In one of its motions, Hulu argued that

the section shows that Congress took into account that providers use third parties in their business operations and “‘allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’

The district court didn’t grant Hulu summary judgment on the issue, essentially passing on the question. But in 2014 the Seventh Circuit reviewed a very similar set of circumstances in Sterk v. Redbox and found that the exception applied. In that case Redbox had a business relationship with Stream, a third party that provided Redbox with automated customer service functions. The Seventh Circuit found that information sharing in such a relationship fell within Redbox’s “ordinary course of business”, and so Redbox was entitled to summary judgment on the VPPA claims against it.

This is essentially the same argument that Hulu was making. Third-party ad networks most certainly provide a service to corporations that serve content over the web. Hulu, Gannett and every other publisher on the web surely could provide their own ad platforms on their own properties. But by doing so they would lose the economic benefits that come from specialization and economies of scale. Thus, working with a third-party ad network pretty clearly replaces the “order fulfillment” and “request processing” functions of a content platform.

The Big Picture

And, stepping back for a moment, it’s important to take in the big picture. The point of the VPPA was to prevent public disclosures that would chill speech or embarrass individuals; the reporter in 1987 set out to expose or embarrass Judge Bork.  This is the situation the VPPA’s drafters had in mind when they wrote the Act. But the VPPA was most emphatically not designed to punish Internet business models — especially of a sort that was largely unknown in 1988 — that serve the interests of consumers.

The 1988 Senate report on the bill, for instance, notes that “[t]he bill permits the disclosure of personally identifiable information under appropriate and clearly defined circumstances. For example… companies may sell mailing lists that do not disclose the actual selections of their customers.”  Moreover, the “[Act] also allows disclosure to permit video tape service providers to use mailing houses, warehouses, computer services, and similar companies for marketing to their customers. These practices are called ‘order fulfillment’ and ‘request processing.’”

Congress plainly contemplated companies being able to monetize their data. And this just as plainly includes the common practice in automated tracking systems on the web today that use customers’ viewing habits to serve them with highly personalized web experiences.

Sites that serve targeted advertising aren’t in the business of embarrassing consumers or abusing their information by revealing it publicly. And, most important, nothing in the VPPA declares that information sharing is prohibited if third-party partners could theoretically construct a profile of users. The technology to construct these profiles simply didn’t exist in 1988, and there is nothing in the Act or its legislative history to support the idea that the VPPA should be employed against the content platforms that outsource marketing to ad networks.

What would make sense is to actually try to fit modern practice in with the design and intent of the VPPA. If, for instance, third-party ad networks were using the profiles they created to extort, blackmail, embarrass, or otherwise coerce individuals, that practice would certainly fall outside the ordinary course of business and should be actionable.

But as it stands, much like the TCPA, the VPPA threatens to become a costly technological anachronism. Future courts should take the lead of the Eleventh and Seventh Circuits and make the law operate in the way it was actually intended. Gannett still has the opportunity to seek rehearing en banc, and after that to petition for cert before the Supreme Court. But the circuit split this presents is the least of our worries. If this issue is not resolved in a way that permits platforms to continue to outsource their marketing efforts as they do today, the effects on innovation could be drastic.

Web platforms — which include much more than just online newspapers — depend upon targeted ads to support their efforts. This applies to mobile apps as well. The “freemium” model has eclipsed the premium model for apps — a fact that reflects the preferences of consumers at large as well as producers. Using the VPPA as a hammer to smash these business models will hurt everyone except, of course, plaintiffs’ attorneys.

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It is, instead, a fact-finding mission, the results of which could guide future inquiries. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected, and it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


Last March, I published an op-ed in the Washington Times on the proposed VENUE Act, a recently introduced bill taken wholesale from a portion of HR 9 (the tendentiously titled “Innovation Act”). HR 9 has rightly stalled given its widespread and radical changes to the patent system, which would weaken and dilute all property rights in innovation. Although superficially more “narrow” because it contains only the venue-rule changes proposed in HR 9, the VENUE Act is just the Son of Frankenstein for the innovation industries. This bill simply continues the anti-patent-owner bias in the DC policy debates that has gone almost completely unchecked since before the start of President Obama’s first term in office.

Here’s a portion of my op-ed:

The VENUE Act is the latest proposal in a multi-year campaign by certain companies and interest groups to revise the rules of the patent system. The fundamental problem is that this campaign has created an entirely one-sided narrative about patent “reform”: all the problems are caused by patent owners and thus the solutions require removing the incentives for patent owners to be bad actors in the innovation economy. This narrative is entirely biased against patented innovation, the driver of America’s innovation economy for over two hundred years that has recognized benefits. As a result, it has produced an equally biased policy debate that inexorably leads to the same conclusion in every “reform” proposal arising from this campaign: these vital property rights must be weakened, watered down, or eliminated when it comes to their licensing in the marketplace or enforcement in courts.

….

In this narrower bill to address litigation abuse, for instance, it is an Alice in Wonderland state of affairs to be talking only about stopping abuse of the courts by patent owners while blatantly ignoring the same abuse by challengers of patents in the administrative review programs run by the Patent Trial and Appeals Board (PTAB). It is widely recognized that the PTAB is incredibly biased against patents in both its procedural and substantive rules. The Supreme Court recently agreed to hear just one of many appeals that are currently working their way through the courts that explicitly address these concerns. There is legitimate outcry about hedge fund managers exploiting the PTAB’s bias against patents by filing petitions to invalidate patents after shorting stocks for bio-pharmaceutical companies that own these patents. The PTAB has been called a “death squad” for patents, and with a patent invalidation rate between 79% to 100%, this is not entirely unjustified rhetoric.

The absence of any acknowledgment that reform of the PTAB is just as pressingly important as venue reform by those pushing for the VENUE Act is a massive elephant in the room. Unfortunately, it is unsurprising. But this is only because it is the latest example of a strikingly one-sided, biased narrative of the past several years about patent “reform.”

As bloggers like to say: Read the whole thing here.

UPDATE: A more in-depth legal analysis of proposed “venue reform” and the collateral damage it would impose on all patent owners is provided by Devlin Hartline in his essay, “Changes to Patent Venue Rules Risk Collateral Damage to Innovators,” which can be read here.

Last week, the Campaign for Sustainable Rx Pricing (CSRxP)—whose membership includes health insurance companies and other health payors, health providers, and consumers—proposed various reforms aimed at addressing the high costs of prescription drugs. CSRxP declares that its proposals will improve the functioning of the pharmaceutical market by increasing pricing transparency, promoting competition, and enhancing value. Although the list contains some good ideas, other proposals would negatively affect the pharmaceutical market and, ultimately, consumers.

The first set of proposals is aimed at increasing transparency in drug pricing.  I’ve previously commented on the likely negative effects of transparency reforms: they impose extensive legal and regulatory costs on businesses and risk harming competition if competitively-sensitive information gets into the wrong hands. CSRxP proposes that manufacturers disclose the price they intend to charge for a drug as part of the FDA approval process and, after approval, report price changes to the Department of Health and Human Services (HHS). Requiring manufacturers to report expected pricing as a condition of FDA approval suggests that the FDA’s role in assessing the risks and efficacy of drugs will merge with a central planner’s job of determining how products should be priced in the market. Will a drug not be approved if the price is too high? Shouldn’t consumers and payors, not a government agency, determine the market demand for a drug? And what will HHS do with the price change information—just condemn the “blameworthy” manufacturers or institute some sort of price control with its ensuing harms?

The second set of proposals purports to promote competition in the market for drugs. Many of these proposals are good ideas and will help bring more and cheaper drugs to market. However, policymakers should tread carefully with others, such as the call to prohibit product-hopping, because overeager adoption or imprecise application of these reforms could curb pharmaceutical innovation and worsen patient health outcomes. Lawmakers must ensure that any adopted reforms balance incentives to innovate against the benefits of greater competition and lower prices.

The third set of proposals targets the so-called “value” of drugs. Here, CSRxP proposes that manufacturers perform comparison studies to demonstrate that their drug is superior to existing drugs. While, in theory, knowing the relative effectiveness of drugs sounds great, there are two critical problems with this approach. First, are we really going to require even more testing by drug manufacturers? Testing and development costs are already estimated to average $2.6 billion for each new drug brought to market; this is one of the explanations for the already high price of drugs. Why would we want to add more expensive testing? Second, I’m skeptical that comparison studies can offer the necessary insight into which drug works best for an individual patient. Drugs that perform extremely well for a small group of people may appear to have only average effectiveness in aggregate studies. And we certainly don’t want the expense of separate comparison studies on countless small groups of patients.

CSRxP also proposes that the government adopt value-based purchasing (VBP) arrangements that link payment for medications to patient outcomes and cost-effectiveness rather than just the quantity of treatments. Although CSRxP doesn’t detail the specific form of VBP they prefer, some of the possibilities could produce harmful consequences. Namely, VBP arrangements that set a standard payment rate for a group of similar drug products, such as reference pricing, will effectively act like a price control because the only way certain drugs will be available is if drug companies agree to offer them at that set rate. Price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage, drug shortages in certain markets, and reduced incentives for innovation.

In sum, while CSRxP’s list contains some good ideas, many of the proposals would ultimately harm the very patients they are designed to benefit. Policymakers should steer clear of any reform that could act as a direct or indirect price control, increase the already high costs of developing drugs, or reduce incentives for innovation.

On April 15, President Obama issued Executive Order 13725, “Steps to Increase Competition and Better Inform Consumers and Workers to Support Continued Growth of the American Economy” (“the Order”).  At first blush, the Order appears quite promising.  It commendably (1) praises competitive markets as a cornerstone of the American economy, and (2) sets the promotion of competitive markets as “a shared priority across the Federal Government.”  The Order then directs executive branch departments and agencies (“agencies”) with “authorities that could be used to enhance competition” to “eliminate regulations that restrict competition without corresponding benefits to the American public.”  Furthermore, agencies are to identify ways they “can promote competition through pro-competitive rulemaking and regulations” and  “by eliminating regulations that restrict competition without corresponding benefits to the American public.”  What’s more, within sixty days agencies shall report to the White House:

“[R]ecommendations on agency-specific actions that eliminate barriers to competition, promote greater competition, and improve consumer access to information needed to make informed purchasing decisions.  Such recommendations shall include a list of priority actions, including rulemakings, as well as timelines for completing those actions. . . .  Subsequently, agencies shall report semi-annually to the President . . . on additional actions that they plan to undertake to promote greater competition.”

Finally, the Order praises the value of federal antitrust enforcement, and directs agencies to cooperate with the two federal antitrust enforcers, the U.S. Federal Trade Commission and the U.S. Department of Justice.

While a presidential nod to the importance of competition and the benefits of procompetitive regulatory reform is always welcome, I fear that the Order is little more than cheap symbolism and is not intended to have real effect.  (I hope, of course, that I am wrong about this.)  Similarly, technology policy writer and fellow Truth on the Market blogger Kristian Stout has opined that “there is nothing in the Order . . . to provide any confidence that competition will, in fact, be promoted.”  This pessimism, unfortunately, is warranted.  It stems from the Obama Administration’s sad history of pursuing policies that are antithetical to procompetitive regulatory reform.

In an April 19 commentary on the Order, Susan Dudley, Director of George Washington University’s Regulatory Studies Center, pointed out that the Council of Economic Advisers Issue Brief accompanying the Order (“Brief”) made no reference to the bipartisan deregulatory successes of the 1970s and 1980s, which featured the elimination of certain agencies and the “removal of unnecessary regulation in several previously-regulated industries, with resulting improvements in innovation and consumer welfare.”  Moreover, as Dudley further explained, the Obama Administration’s longstanding anticompetitive and pro-regulatory policies fly in the face of the procompetitive regulatory reform goals that inform the Order and the Brief:

Recent years have seen a resurgence of economic regulation, which may be contributing to the decline in competition and innovation that the issue brief decries.  Regulations under the Affordable Care Act and Dodd-Frank Act, for example, limit prices, control entry, and constrain service quality.  The flurry of standards mandating the energy-efficiency of appliances and fuel-economy of vehicles restricts consumer choices.  And, many would argue that Federal Communications Commission’s net neutrality rules and the Department of Labor’s fiduciary rules—two areas that . . . [the Brief] highlight[s] as illustrating the “pro-competition progress” on which the executive order will build—are indeed anticompetitive, limiting the arrangements that could emerge from competitive markets, and potentially harming innovation.

The ever-increasing size and scope of the economic harm imposed by the Obama Administration’s regulatory programs, alluded to by Dudley, has been documented in “Red Tape Rising,” an annual report produced by Heritage Foundation scholars James L. Gattuso and Diane Katz.  The 2015 Red Tape Rising report (the 2016 version will be released later this spring) reported these sobering findings (footnotes omitted):

The number and cost of government regulations continued to climb in 2014, intensifying Washington’s control over the economy and Americans’ lives.  The addition of 27 new major rules last year pushed the tally for the Obama Administration’s first six years to 184, with scores of other rules in the pipeline.  The cost of just these 184 rules is estimated by regulators to be nearly $80 billion annually, although the actual cost of this massive expansion of the administrative state is obscured by the large number of rules for which costs have not been fully quantified.  Absent substantial reform, economic growth and individual freedom will continue to suffer. . . .  Many more regulations are on the way, with another 126 economically significant rules on the Administration’s agenda, such as directives to farmers for growing and harvesting fruits and vegetables; strict limits on credit access for service members; and, yet another redesign of light bulbs.

To combat this regulatory morass, the 2015 Red Tape Rising study made these recommendations:

Immediate reforms should include requiring legislation to undergo an analysis of regulatory impacts before a floor vote in Congress, and requiring every major regulation to obtain congressional approval before taking effect. Sunset deadlines should be set in law for all major rules, and independent agencies should be subject—as are executive branch agencies—to the White House regulatory review process.

If the Obama Administration is truly serious about procompetitive regulatory reform, and wants to confound the skeptics, it should endorse the Red Tape Rising recommendations as follow-on steps taken in light of the Order. The Administration should also take additional specific actions, including, for example: (1) requiring that the agency regulatory reform recommendations called for by the Order be evaluated by the Office of Management and Budget’s expert regulatory review arm, the Office of Information and Regulatory Affairs (OIRA); (2) publicly committing to rooting out anticompetitive and non-cost-beneficial regulations on the basis of OIRA reviews of agency recommendations; and (3) preparing a discrete legislative package of targeted statutory reforms to diminish the burden of federal regulation, which could be taken up by the next Administration. Simultaneously, the White House could recant its prior public support for over-regulatory initiatives taken by specific agencies, such as its endorsement of the anti-innovation Federal Communications Commission “net neutrality” rules (see a scholarly critique here) and set-top box rules (see my critical commentary here). By acting in this manner, the Obama Administration would demonstrate its commitment to the spirit of the Order and, thus, to the promotion of a more vibrant and efficient American economy.

In order to move in the direction I recommend, the Administration would have to reject the notion that market competition can somehow be micromanaged and improved upon by enactment of enlightened “pro-competitive” regulatory guidance.  This notion, which was articulated by Oscar Lange among many others (see, for example, Lange’s “On the Economic Theory of Socialism,” here and here), presumes in the extreme that government bureaucrats are able to set optimal economy-wide rules and prices that generate economic efficiency.  Friedrich Hayek effectively refuted this notion as a matter of theory (see, for example, Hayek’s “The Use of Knowledge in Society”), and nearly a century of failed socialist experiments have refuted it as a matter of empirical fact.  What’s more, even “limited” issue-specific government regulation has too often reduced economic welfare and efficiency, as predicted by public choice theory (see, for example, here).  Perhaps some wise senior official will take these teachings to heart and convince the Obama White House to apply them henceforth – but I am not holding my breath.