In a Heritage Foundation paper released today, I argue that U.S. antidumping law should be reformed to incorporate principles drawn from the antitrust analysis of predatory pricing.  Such a change would transform antidumping law from a special-interest cronyist tool that harms U.S. consumers into a sensible procompetitive provision.  A brief summary of my paper follows.

Imports and Dumping

Imported goods and services provide great benefits to the American economy and to American consumers.  Imports contribute to U.S. job creation on a large scale, provide key components incorporated by U.S. manufacturers into their products, and substantially raise the purchasing power of American consumers.

Despite the benefits of imports, well-organized domestic industries have long sought to protect themselves from import competition by convincing governments to impose import restrictions that raise the costs of imported goods and thus reduce the demand for imports.  One of the best-known types of import restrictions (one that is allowed under international trade agreements and employed by many other countries as well) is an “antidumping duty,” a special tariff assessed on imported goods that allegedly are priced at “unfairly low” levels relative to the prices charged for the same products in the exporter’s home market.

Product-specific U.S. antidumping investigations are undertaken by the U.S. Department of Commerce (DOC) and the U.S. International Trade Commission (USITC, an independent federal agency), in response to a petition from a U.S. producer, a group of U.S. producers, or a U.S. labor union.  The DOC determines if dumping has occurred and calculates the “dumping margin” (the difference between a “fair” and an “unfair” price) for the setting of antidumping tariffs.  The USITC decides whether a domestic industry has been “materially injured” by dumping.  If the USITC finds material injury, the DOC publishes an antidumping order, which requires importers of the investigated merchandise to post a cash deposit equal to the estimated dumping duty margins.

Economists define dumping as international “price discrimination”— the charging of lower prices (net of selling expenses and transportation) in a foreign market than in a domestic market for the same product.  Despite its bad-sounding label, price discrimination, whether foreign or domestic, is typically a perfectly legitimate profitable business practice that benefits many consumers.  Price discrimination allows a producer to sell to additional numbers of price-sensitive consumers in the low-priced market, to their benefit:  Those consumers would have bought nothing at all if faced with a uniformly applied higher price.

Dumping harms domestic consumers and the overall economy only when the foreign seller successfully drives domestic producers out of business by charging an overly low “predatory” (below its cost) import price, monopolizes the domestic market, and then raises import prices to monopoly levels, thereby recouping any earlier losses.  In such a situation, domestic consumers pay higher prices over time due to the domestic monopoly, and domestic producers that exited the market due to predation suffer welfare losses as well.

The Problem with Current U.S. Antidumping Law

Although antidumping law originally was aimed at counteracting such predation, antidumping provisions long ago were reformulated to raise the likelihood that dumping would be found in matters under investigation.  In particular, 1974 legislation required that home-market sales made below full production cost be disregarded and promoted the use of “constructed value” calculations, which approximate the cost of production; selling, general, and administrative expenses; and an amount for profit.  Compared with the traditional approach of comparing actual net foreign product prices with net U.S. prices, this methodology tended to favor domestic producers by yielding higher dumping margins.

The favoring of domestic industries continued with the Trade and Tariff Act of 1984, which compelled the USITC to use a “cumulation” analysis that could subject multiple countries to antidumping penalties even if only one country’s product was found to cause material injury to a domestic industry.  More specifically, under cumulation, if multiple countries are being investigated for dumping the same product and exports from any one of those countries, or all in combination, are found to cause material injury, then exports from all of the investigated countries are made subject to an antidumping order.  Thus, imports from countries that individually could not be shown to cause material injury face a price increase — an outcome that harms American consumers and lacks any legitimate rationale.

These and other developments have further encouraged American industries to invoke antidumping as a protectionist mechanism.  Thus, it is not surprising that in recent decades, there has been a significant increase in the number of U.S. antidumping cases filed and the number of affirmative injury findings.  Also noteworthy is the proliferation of foreign antidumping laws since 1980, which harms American exporters. Overall, the economic impact of antidumping law on the American economy has grown substantially.  In short, antidumping is a cronyist special interest law that harms American consumers.

Moreover, even taking domestic industrial interests into account, prohibiting dumping likely would not have a positive effect on domestic industry as a whole.  Antidumping restrictions on imported raw materials and industrial inputs used by U.S. firms make it difficult for those firms to compete internationally, yet the USITC is statutorily barred from considering the impact of antidumping duties on these consuming industries.  The consuming industries are often a larger part of the U.S. economy than the industries benefitting from antidumping regulation, and producers of upstream products have become reliant on restricting their customers’ access to foreign goods rather than better responding to their customers’ needs.

Furthermore, antidumping harms the U.S. economy by reducing American firms’ incentive to produce more efficiently.  Non-predatory dumping spurs domestic firms to produce more efficiently (at lower costs) so that they can reduce prices and compete with imports in order to remain in the market.  Finally, the existence of antidumping law may encourage implicit collusion among domestic firms and foreign firms to soften price competition.  The truth is that when domestic industries complain that non-predatory dumping is “unfair,” they are really objecting to competition on the merits — competition that raises overall long-term American economic welfare.

A New Antitrust-Based Predatory Pricing Test for Dumping

In sum, aggressive price competition by foreign producers benefits American consumers, enhances economic efficiency, and promotes competitive vigor — net benefits to the American economy.  Only below-cost “predatory dumping” by a foreign monopolist that allows it to drive out American producers and then charge monopoly prices to American consumers should be a source of U.S. policy concern and legal prohibition.

A test that would prohibit only harmful predatory dumping can be drawn directly from a standard developed by U.S. courts and scholars for determining illegal price predation under American antitrust law.  Applying that test in antidumping cases, antidumping tariffs would be imposed only when two conditions were satisfied.

First, the government would have to determine that the imports under scrutiny were priced at a below-cost level that caused the foreign producer to incur losses on the production and sale of those imports.  This would be a price below “average avoidable cost,” which would include all the costs that a firm could have avoided incurring by not producing the allegedly dumped products.

Second, if the first condition were met, the government would have to show that the firm allegedly doing the dumping would be likely to “recoup” — that is, to charge high monopoly prices for future imports that more than make up for its current losses on below-cost imports.
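
To make the proposed two-part screen concrete, here is a minimal illustrative sketch in Python.  The function names, thresholds, and figures are hypothetical illustrations of the logic described above; they are not drawn from the paper or from any statute.

```python
# Illustrative sketch of the proposed two-part antidumping screen.
# All names and numbers are hypothetical.

def priced_below_avoidable_cost(import_price: float, average_avoidable_cost: float) -> bool:
    """Prong 1: is the import priced below average avoidable cost, so that
    the foreign producer loses money on the production and sale of the goods?"""
    return import_price < average_avoidable_cost


def recoupment_likely(expected_future_margin: float,
                      expected_future_volume: float,
                      current_losses: float) -> bool:
    """Prong 2: would future monopoly pricing plausibly more than make up
    for the losses incurred on today's below-cost imports?"""
    return expected_future_margin * expected_future_volume > current_losses


def antidumping_duty_warranted(import_price: float, average_avoidable_cost: float,
                               expected_future_margin: float,
                               expected_future_volume: float,
                               current_losses: float) -> bool:
    # Duties would issue only if BOTH prongs are satisfied.
    return (priced_below_avoidable_cost(import_price, average_avoidable_cost)
            and recoupment_likely(expected_future_margin,
                                  expected_future_volume, current_losses))


# An import priced above average avoidable cost fails prong 1, so no duty
# would be warranted regardless of any recoupment story.
print(antidumping_duty_warranted(import_price=10.0, average_avoidable_cost=9.0,
                                 expected_future_margin=3.0,
                                 expected_future_volume=1_000_000,
                                 current_losses=2_000_000))  # False
```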

This proposed new antidumping methodology would be administrable.  Indeed, because it focuses narrowly and solely on certain readily ascertainable costs and data on domestic industry viability, it should be easier (and thus less costly) to apply than the broad and uncertain methodologies under current law.

Of perhaps greater significance, it could serve as a sign that the U.S. government favors competition on the merits and rejects special-interest cronyism — a message that could prove valuable in international negotiations aimed at having other nations’ antidumping regimes adopt a similar approach.  To the extent that other jurisdictions adopted reforms that emulated the new American approach, U.S. exporters would benefit from reduced barriers to trade, a further boon to the U.S. economy.

Conclusion

U.S. antidumping law should be reformed so that it is subject to a predatory pricing test drawn from American antitrust law.  Application of such a standard would strengthen the American economy and benefit U.S. consumers while precluding any truly predatory dumping designed to destroy domestic industries and monopolize American industrial sectors.

The Heritage Foundation continues to do path-breaking work on the burden overregulation imposes on the American economy, and to promote comprehensive reform measures to reduce regulatory costs.  Overregulation, unfortunately, is a global problem, and one that is related to the problem of anticompetitive market distortions (ACMDs) – government-supported cronyist restrictions that weaken the competitive process, undermine free trade, slow economic growth, and harm consumers.  Shanker Singham and I have written about the importance of estimating the effects of and tackling ACMDs if international trade liberalization measures are to be successful in promoting economic growth and efficiency.

The key role of tackling ACMDs in spurring economic growth is highlighted by the highly publicized Greek economic crisis.  The Heritage Foundation recently assessed the issues of fiscal profligacy and over-taxation that need to be addressed by Greece.  While those issues are of central importance, Greece will not be able to fulfill its economic potential without also undertaking substantial regulatory reforms and eliminating ACMDs.  In that regard, a 2014 OECD report on competition-distorting rules and provisions in Greece concluded that the elimination of barriers to competition would lead to increased productivity, stronger economic growth, and job creation.  That report, which focused on regulatory restrictions in just four sectors of the Greek economy (food processing, retail trade, building materials, and tourism), made 329 specific recommendations to mitigate harm to competition.  It estimated that the benefit to the Greek economy of implementing those reforms would be around EUR 5.2 billion – the equivalent of 2.5% of GDP – due to increased purchasing power for consumers and efficiency gains for companies.  It also stressed that implementing those recommendations would have an even wider impact over time.  Extended to all other sectors of the Greek economy (which are also plagued by overregulation and competitive distortions), the welfare gains from Greek regulatory reforms would be far larger.  The OECD’s Competition Assessment Toolkit provides a useful framework that Greece and other reform-minded nations could use to identify harmful regulatory restrictions.

Unfortunately, in Greece and elsewhere, merely identifying the sources of bad regulation is not enough – political will is needed to actually dismantle harmful regulatory barriers and cronyist rules.  As Shanker Singham pointed out yesterday in commenting on the prospects for Greek regulatory reform, “[t]here is enormous wealth locked away in the Greek economy, just as there is in every country, but distortions destroy it.  The Greek competition agency has done excellent work in promoting a more competitive market, but its political masters merely pay lip service to the concept. . . .  The Greeks have offered promises of reform, but very little acceptance of the major structural changes that are needed.”  The United States is not immune to this problem – consider the case of the Export-Import Bank, whose inefficient credit distortionary policies proved impervious to reform, as the Heritage Foundation explained.

What, then, can be done to reduce the burden of overregulation and ACMDs, in Greece, the United States, and other countries?  Consistent with Justice Louis Brandeis’s observation that “sunshine is the best disinfectant,” shining a public spotlight on the problem can, over time, help build public support for dismantling or reforming welfare-inimical restrictions.  In that regard, the Heritage Foundation’s Index of Economic Freedom takes into account “regulatory efficiency,” and, in particular, “the overall burden of regulation as well as the efficiency of government in the regulatory process,” in producing annual ordinal rankings of every nation’s degree of economic freedom.  Public concern has to translate into action to be effective, of course, and thus the Heritage Foundation has promulgated a list of legislative reforms that could help rein in federal regulatory excesses.  Although there is no “silver bullet,” the Heritage Foundation will continue to publicize regulatory overreach and ACMDs, and propose practical solutions to dismantle these harmful distortions.  This is a long-term fight (incentives for government to overregulate and engage in cronyism are not easily curbed), but well worth the candle.

Patent reform legislation is under serious consideration by the Senate and House of Representatives, a mere four years after the America Invents Act of 2011 (AIA) brought about a major overhaul of United States patent law. A primary goal of current legislative efforts is the reining in of “patent trolls” (also called “patent assertion entities”), that is, firms that purchase others’ patents for the sole purpose of threatening third parties with costly lawsuits if they fail to pay high patent license fees. A related concern is that many patents acquired by trolls are “poor quality,” and that parties approached by trolls too often are induced to “pay up” without regard to the underlying merits of the matter.
In a Heritage Foundation paper released today (see http://www.heritage.org/research/reports/2015/07/a-measured-approach-to-patent-reform-legislation), John Malcolm and I briefly review developments since the AIA’s enactment, comment on the patent troll issue, and provide our perspective on certain categories of patent law changes now being contemplated.
Addressing recent developments, we note that the Supreme Court of the United States has issued a number of major decisions over the past decade (five in its 2013–2014 term alone) that are aimed at tightening the qualifications for obtaining patents and enhancing incentives to bring legitimate challenges to questionable patents. Although there is no single judicial silver bullet, there is good reason to believe that, taken as a whole, these decisions will significantly enhance efforts to improve patent quality and to weed out bad patents and frivolous lawsuits.
With respect to patent trolls, we explain that there can be good patent assertion entities that seek licensing agreements and file claims to enforce legitimate patents and bad patent assertion entities that purchase broad and vague patents and make absurd demands to extort license payments or settlements. The proper way to address patent trolls, therefore, is by using the same means and methods that would likely work against ambulance chasers or other bad actors who exist in other areas of the law, such as medical malpractice, securities fraud, and product liability—individuals who gin up or grossly exaggerate alleged injuries and then make unreasonable demands to extort settlements up to and including filing frivolous lawsuits.
We emphasize that Congress should exercise caution in addressing patent litigation reforms. Despite its imperfections, the U.S. patent law system unquestionably has been associated with spectacular innovation in a wide variety of fields, ranging from smartphones to pharmaceuticals. Thus, in deciding what statutory fixes are appropriate to rein in patent litigation abuses, Congress should seek to minimize the risk that changes in the law will have the unintended consequence of weakening patent rights, thereby undermining American innovation.
We then turn to assess proposals dealing with heightened patent pleading requirements; greater patent transparency; case management and discovery limits; stays of suits against customers; the award of attorneys’ fees and costs to the prevailing party; joinder of third parties; reining in abusive demand letters; post-grant administrative patent review reforms; and minor miscellaneous reforms. We conclude that many of these reforms appear to have significant merit and could prove useful in reducing the costs of the patent litigation system. Nevertheless, there is a serious concern that certain reform proposals would make it more difficult for holders of legitimate patents to vindicate their rights. In addition, as is the case with all new legislation, there is the risk that novel legislative language might have unintended consequences, including the effects of future court decisions construing the newly-adopted language.
Accordingly, before deciding to take action, we believe that Congress should weigh the particular merits of individual reform proposals carefully and meticulously, taking into account their possible harmful effects as well as their intended benefits. Precipitous, unreflective action on legislation is unwarranted, and caution should be the byword, especially since the effects of 2011 legislative changes and recent Supreme Court decisions have not yet been fully absorbed. Taking time is key to avoiding the serious and costly errors that too often are the fruit of omnibus legislative efforts.
In sum, careful, sober, detailed assessment is warranted to ensure that further large-scale changes in U.S. patent law advance the goal of improving the U.S. patent system as a whole, with due attention to the rights of inventors and the socially beneficial innovations that they generate.

FTC Commissioner Josh Wright has some wise thoughts on how to handle a small GUPPI. I don’t mean the fish. Dissenting in part in the Commission’s disposition of the Family Dollar/Dollar Tree merger, Commissioner Wright calls for creating a safe harbor for mergers where the competitive concern is unilateral effects and the merger generates a low score on the “Gross Upward Pricing Pressure Index,” or “GUPPI.”

Before explaining why Wright is right on this one, some quick background on the GUPPI. In 2010, the DOJ and FTC revised their Horizontal Merger Guidelines to reflect better the actual practices the agencies follow in conducting pre-merger investigations. Perhaps the most notable new emphasis in the revised guidelines was a move away from market definition, the traditional starting point for merger analysis, and toward consideration of potentially adverse “unilateral” effects—i.e., anticompetitive harms that, unlike collusion or even non-collusive oligopolistic pricing, need not involve participation of any non-merging firms in the market. The primary unilateral effect emphasized by the new guidelines is that the merger may put “upward pricing pressure” on brand-differentiated but otherwise similar products sold by the merging firms. The guidelines maintain that when upward pricing pressure seems significant, it may be unnecessary to define the relevant market before concluding that an anticompetitive effect is likely.

The logic of upward pricing pressure is straightforward. Suppose five firms sell competing products (Products A-E) that, while largely substitutable, are differentiated by brand. Given the brand differentiation, some of the products are closer substitutes than others. If the closest substitute to Product A is Product B and vice-versa, then a merger between Producer A and Producer B may result in higher prices even if the remaining producers (C, D, and E) neither raise their prices nor reduce their output. The merged firm will know that if it raises the price of Product A, most of the lost sales will be diverted to Product B, which that firm also produces. Similarly, sales diverted from Product B will largely flow to Product A. Thus, the merged company, seeking to maximize its profits, may face pressure to raise the prices of Products A and/or B.

The GUPPI seeks to assess the likelihood, absent countervailing efficiencies, that the merged firm (e.g., Producer A combined with Producer B) would raise the price of one of its competing products (e.g., Product A), causing some of the lost sales on that product to be diverted to its substitute (e.g., Product B). The GUPPI on Product A would thus consist of:

(Value of Sales Diverted to Product B) ÷ (Foregone Revenues on Lost Product A Sales)

The value of sales diverted to Product B, the numerator, is equal to the number of units diverted from Product A to Product B times the profit margin (price minus marginal cost) on Product B. The foregone revenues on lost Product A sales, the denominator, is equal to the number of lost Product A sales times the price of Product A. Thus, the fraction set forth above is equal to:

(Number of A Sales Diverted to B × Unit Margin on B) ÷ (Number of A Sales Lost × Price of A)

The Guidelines do not specify how high the GUPPI for a particular product must be before competitive concerns are raised, but they do suggest that at some point, the GUPPI is so small that adverse unilateral effects are unlikely. (“If the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.”) Consistent with this observation, DOJ’s Antitrust Division has concluded that a GUPPI of less than 5% will not give rise to a merger challenge.
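
To make the arithmetic concrete, here is a minimal sketch of the GUPPI calculation just described, implemented in Python with purely hypothetical numbers; the 5% cutoff simply mirrors DOJ's reported practice, and nothing below is drawn from the Family Dollar/Dollar Tree record.

```python
# Minimal sketch of the GUPPI arithmetic described above.
# All figures are hypothetical illustrations, not case data.

def guppi_on_a(lost_a_units: float, diverted_to_b_units: float,
               price_a: float, price_b: float, marginal_cost_b: float) -> float:
    """GUPPI on Product A: value of sales diverted to B (diverted units times
    unit margin on B) divided by foregone revenue on lost A sales (lost units
    times the price of A)."""
    unit_margin_b = price_b - marginal_cost_b
    return (diverted_to_b_units * unit_margin_b) / (lost_a_units * price_a)


# Hypothetical: a price increase on A loses 100 sales, 30 of which divert to B;
# B sells for $10 with a $4 unit margin; A also sells for $10.
score = guppi_on_a(lost_a_units=100, diverted_to_b_units=30,
                   price_a=10.0, price_b=10.0, marginal_cost_b=6.0)
print(f"GUPPI on Product A: {score:.1%}")         # 12.0%
print("Within a 5% safe harbor?", score < 0.05)   # False
```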

Commissioner Wright has split with his fellow commissioners over whether the FTC should similarly adopt a safe harbor for horizontal mergers where the adverse competitive concern is unilateral effects and the GUPPIs are less than 5%. Of the 330 markets in which the Commission is requiring divestiture of stores, 27 involve GUPPIs of less than 5%. Commissioner Wright’s position is that the combinations in those markets should be deemed to fall within a safe harbor. At the very least, he says, there should be some safe harbor for very small GUPPIs, even if it kicks in somewhere below the 5% level. The Commission has taken the position that there should be no safe harbor for mergers where the competitive concern is unilateral effects, no matter how low the GUPPI. Instead, the Commission majority says, GUPPI is just a starting point; once the GUPPIs are calculated, each market should be assessed in light of qualitative factors, and a gestalt-like, “all things considered” determination should be made.

The Commission majority purports to have taken this approach in the Family Dollar/Dollar Tree case. It claims that having used GUPPI to identify some markets that were presumptively troubling (markets where GUPPIs were above a certain level) and others that were presumptively not troubling (low-GUPPI markets), it went back and considered qualitative evidence for each, allowing the presumption to be rebutted where appropriate. As Commissioner Wright observes, though, the actual outcome of this purported process is curious: almost none of the “presumptively anticompetitive” markets were cleared based on qualitative evidence, whereas 27 of the “presumptively competitive” markets were slated for a divestiture despite the low GUPPI. In practice, the Commission seems to be using high GUPPIs to condemn unilateral effects mergers, while not allowing low GUPPIs to acquit them. Wright, by contrast, contends that a low-enough GUPPI should be sufficient to acquit a merger where the only plausible competitive concern is adverse unilateral effects.

He’s right on this, for at least five reasons.

  1. Virtually every merger involves a positive GUPPI. As long as any sales would be diverted from one merging firm to the other and the firms are pricing above cost (so that there is some profit margin on their products), a merger will involve a positive GUPPI. (Recall that the numerator in the GUPPI is “number of diverted sales * profit margin on the product to which sales are diverted.”) If qualitative evidence must be considered and a gestalt-like decision made in even low-GUPPI cases, then that’s the approach that will always be taken and GUPPI data will be essentially irrelevant.
  2. Calculating GUPPIs is hard. Figuring the GUPPI requires the agencies to make some difficult determinations. Calculating the “diversion ratio” (the percentage of lost A sales that are diverted to B when the price of A is raised) requires determinations of A’s “own-price elasticity of demand” as well as the “cross-price elasticity of demand” between A and B. Calculating the profit margin on B requires determining B’s marginal cost. Assessing elasticity of demand and marginal cost is notoriously difficult. This difficulty matters here for a couple of reasons:
    • First, why go through the difficult task of calculating GUPPIs if they won’t simplify the process of evaluating a merger? Under the Commission’s purported approach, once GUPPI is calculated, enforcers still have to consider all the other evidence and make an “all things considered” judgment. A better approach would be to cut off the additional analysis if the GUPPI is sufficiently small.
    • Second, given the difficulty of assessing marginal cost (which is necessary to determine the profit margin on the product to which sales are diverted), enforcers are likely to use a proxy, and the most commonly used proxy for marginal cost is average variable cost (i.e., the total non-fixed costs of producing the products at issue divided by the number of units produced). Average variable cost, though, tends to be smaller than marginal cost over the relevant range of output, which will cause the profit margin (price – “marginal” cost) on the product to which sales are diverted to appear higher than it actually is. And that will tend to overstate the GUPPI. Thus, at some point, a positive but low GUPPI should be deemed insignificant. (A numerical sketch following this list illustrates the point.)
  3. The GUPPI is biased toward an indication of anticompetitive effect. GUPPI attempts to assess gross upward pricing pressure. It takes no account of factors that tend to prevent prices from rising. In particular, it ignores entry and repositioning by other product-differentiated firms, factors that constrain the merged firm’s ability to raise prices. It also ignores merger-induced efficiencies, which tend to put downward pressure on the merged firm’s prices. (Granted, the merger guidelines call for these factors to be considered eventually, but the factors are generally subject to higher proof standards. Efficiencies, in particular, are pretty difficult to establish under the guidelines.) The upshot is that the GUPPI is inherently biased toward an indication of anticompetitive harm. A safe harbor for mergers involving low GUPPIs would help counter-balance this built-in bias.
  4. Divergence from DOJ’s approach will create an arbitrary result. The FTC and DOJ’s Antitrust Division share responsibility for assessing proposed mergers. Having the two enforcement agencies use different standards in their evaluations injects a measure of arbitrariness into the law. In the interest of consistency, predictability, and other basic rule of law values, the agencies should get on the same page. (And, for reasons set forth above, DOJ’s is the better one.)
  5. A safe harbor is consistent with the Supreme Court’s decision-theoretic antitrust jurisprudence. In recent years, the Supreme Court has generally crafted antitrust rules to optimize the costs of errors and of making liability judgments (or, put differently, to “minimize the sum of error and decision costs”). On a number of occasions, the Court has explicitly observed that it is better to adopt a rule that will allow the occasional false acquittal if doing so will prevent greater costs from false convictions and administration. The Brooke Group rule that there can be no predatory pricing liability absent below-cost pricing, for example, is expressly not premised on the belief that low, but above-cost, pricing can never be anticompetitive; rather, the rule is justified on the ground that the false negatives it allows are less costly than the false positives and administrative difficulties a more “theoretically perfect” rule would generate. Indeed, the Supreme Court’s antitrust jurisprudence seems to have wholeheartedly endorsed Voltaire’s prudent aphorism, “The perfect is the enemy of the good.” It is thus no answer for the Commission to observe that adverse unilateral effects can sometimes occur when a combination involves a low (<5%) GUPPI. Low but above-cost pricing can sometimes be anticompetitive, but Brooke Group’s safe harbor is sensible and representative of the approach the Supreme Court thinks antitrust should take. The FTC should get on board.
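
As promised in point 2 above, here is a small hypothetical illustration of how substituting a lower average variable cost for true marginal cost mechanically inflates the computed GUPPI; all numbers are invented for the example.

```python
# Hypothetical illustration of the cost-proxy bias noted in point 2 above.
# Using a lower average-variable-cost figure in place of marginal cost
# overstates B's unit margin and therefore the computed GUPPI.

def guppi_on_a(lost_a_units, diverted_to_b_units, price_a, price_b, cost_b):
    unit_margin_b = price_b - cost_b
    return (diverted_to_b_units * unit_margin_b) / (lost_a_units * price_a)

true_marginal_cost_b = 8.0   # assumed "true" marginal cost of Product B
avg_variable_cost_b = 7.0    # lower average-variable-cost proxy

with_true_mc = guppi_on_a(100, 20, 10.0, 10.0, true_marginal_cost_b)   # 4.0%
with_avc_proxy = guppi_on_a(100, 20, 10.0, 10.0, avg_variable_cost_b)  # 6.0%

print(f"GUPPI using true marginal cost: {with_true_mc:.1%}")
print(f"GUPPI using the AVC proxy:      {with_avc_proxy:.1%}")
# The proxy alone pushes the same hypothetical transaction from below to
# above a 5% screen, one reason a small positive GUPPI should not be read
# as evidence of likely competitive harm.
```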

One final point. It is important to note that Commissioner Wright is not saying—and would be wrong to say—that a high GUPPI should be sufficient to condemn a merger. The GUPPI has never been empirically verified as a means of identifying anticompetitive mergers. As Dennis Carlton observed, “[T]he use of UPP as a merger screen is untested; to my knowledge, there has been no empirical analysis that has been performed to validate its predictive value in assessing the competitive effects of mergers.” Dennis W. Carlton, Revising the Horizontal Merger Guidelines, 10 J. Competition L. & Econ. 1, 24 (2010). This dearth of empirical evidence seems especially problematic in light of the enforcement agencies’ spotty track record in predicting the effects of mergers. Craig Peters, for example, found that the agencies’ merger simulations produced wildly inaccurate predictions about the price effects of airline mergers. See Craig Peters, Evaluating the Performance of Merger Simulation: Evidence from the U.S. Airline Industry, 49 J.L. & Econ. 627 (2006). Professor Carlton thus warns (Carlton, supra, at 32):

UPP is effectively a simplified version of merger simulation. As such, Peters’s findings tell a cautionary tale—more such studies should be conducted before one treats UPP, or any other potential merger review method, as a consistently reliable methodology by which to identify anticompetitive mergers.

The Commission majority claims to agree that a high GUPPI alone should be insufficient to condemn a merger. But the actual outcome of the analysis in the case at hand—i.e., finding almost all combinations involving high GUPPIs to be anticompetitive, while deeming the procompetitive presumption to be rebutted in 27 low-GUPPI cases—suggests that the Commission is really allowing high GUPPIs to “prove” that anticompetitive harm is likely.

The point of dispute between Wright and the other commissioners, though, is about how to handle low GUPPIs. On that question, the Commission should either join the DOJ in recognizing a safe harbor for low-GUPPI mergers or play it straight with the public and delete the Horizontal Merger Guidelines’ observation that “[i]f the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.” The better approach would be to affirm the Guidelines and recognize a safe harbor.

On Thursday I will be participating in an ABA panel discussion on the Apple e-books case, along with Mark Ryan (former DOJ attorney) and Fiona Scott-Morton (former DOJ economist), both of whom were key members of the DOJ team that brought the case. Details are below. Judging from the prep call, it should be a spirited discussion!

Readers looking for background on the case (as well as my own views — decidedly in opposition to those of the DOJ) can find my previous commentary on the case and some of the issues involved here:

Other TOTM authors have also weighed in. See, e.g.:

DETAILS:

ABA Section of Antitrust Law

Federal Civil Enforcement Committee, Joint Conduct, Unilateral Conduct, and Media & Tech Committees Present:

“The 2d Cir.’s Apple E-Books decision: Debating the merits and the meaning”

July 16, 2015
12:00 noon to 1:30 pm Eastern / 9:00 am to 10:30 am Pacific

On June 30, the Second Circuit affirmed DOJ’s trial victory over Apple in the Ebooks Case. The three-judge panel fractured in an interesting way: two judges affirmed the finding that Apple’s role in a “hub and spokes” conspiracy was unlawful per se; one judge also would have found a rule-of-reason violation; and the dissent — stating Apple had a “vertical” position and was challenging the leading seller’s “monopoly” — would have found no liability at all. What is the reasoning and precedent of the decision? Is “marketplace vigilantism” (the concurring judge’s phrase) ever justified? Our panel — which includes the former DOJ head of litigation involved in the case — will debate the issues.

Moderator

  • Ken Ewing, Steptoe & Johnson LLP

Panelists

  • Geoff Manne, International Center for Law & Economics
  • Fiona Scott Morton, Yale School of Management
  • Mark Ryan, Mayer Brown LLP

Register HERE

Uber is currently facing a set of plaintiffs who are seeking class certification in the Northern District of California (O’Connor, et al. v. Uber, #CV 13-3826-EMC) on two distinct grounds. First, the plaintiffs allege that Uber systematically deprived them of tips from riders, by virtue of how the service is presented to end-users and how payment flows from riders to drivers, in violation of the California Unfair Competition Law, Cal. Bus. & Prof. Code § 17200 et seq. Second, the plaintiffs claim that Uber misclassified its drivers – all 160,000 of them in California over the last five years – by failing to treat them as “employees” within the legal definition of that term and, following from this, deprived said “employees” of reimbursement for things like mileage, gas, and other wear and tear on their vehicles (not to mention the shadow of entitlements like benefits and workers’ comp).

Essentially, claim one is based on the notion that Uber informs passengers that gratuity is included in the total cost of the car service and that there is no need to tip the driver. However, according to the plaintiffs, Uber either failed to collect this gratuity or, by failing to differentiate between the gratuity and the fare and then collecting its own 20% cut of the total fee, improperly retained some of the gratuity for itself. In truth, it’s not completely clear from the complaint exactly how the plaintiffs are calculating allegedly withheld tips. Uber does a good job in its motion to defeat certification of pointing out, on the one hand, that there is no such thing as a “standard tip” and, on the other hand, that the assessment of the tip issue would require so much individualized examination — from figuring out whether drivers were told that they could be tipped or not, to figuring out if drivers actually were consistently tipped — that the common issues proper to class examination would be overwhelmed.

The real meat of this case, however, and the issue with the most effect on both Uber’s bottom line as well as on the future of sharing platforms generally, is whether the drivers should be classified as employees or not.

Uber’s motion to defeat certification is, logically enough, based on attacking the commonality and typicality requirements of Rule 23. The main thrust of Uber’s motion is that not only would the four named plaintiffs be inappropriate representatives of the 160,000-member class of allegedly harmed drivers, but also that no plaintiffs could represent such a class, because the relationship between Uber and its drivers is so diverse that no common questions or issues would control the proceeding. In support of its position, Uber introduced the sworn declarations of over 400 Uber drivers from California, each detailing a unique situation that either does not match the harms alleged by the named plaintiffs or places the declarant squarely at odds with them.

Further, there were seventeen different contracts involved in the relationship between Uber and the 160,000 drivers swept up into the suit, which would make identifying common questions exceedingly difficult. Even terms that are common across agreements, Uber claims, would have enough distinction between them to make class certification impossible. For instance, Uber cited numerous examples from its different agreements where tipping was permitted, and others where it was not mentioned at all. Similarly, Uber cited examples where the right to terminate rested solely with Uber, and others where the right to terminate was by mutual consent between Uber and the driver.  Further, Uber claims that the employment test from Borello (the case that governs employee classification in California) requires a fact-based examination of each driver’s particular circumstances owing to the wide variation in contract terms — further making class certification inappropriate.

Uber’s arguments are all sound, and I sincerely hope that it defeats the class certification. But the case itself represents an ongoing and persistent problem for Uber and sharing economy platforms across the United States (and the world, really). The core of that problem is simply this: are you an employee or a contractor? A heading from Uber’s motion stands out to me as emblematic of this problem:

The Named Plaintiffs Are Not Typical Of the Putative Class Because There Is No Typical Uber Driver

There is no typical Uber driver because Uber is just a platform, the definitions of our antiquated legal system notwithstanding. The real value proposition of sharing platforms is that they enable normal folks — that is, people outside of a typically defined industry — to take part in an industry that was previously dominated by firms (and replete with considerable barriers to entry). As the Northern District of California observes in Cotter v. Lyft, trying to fit a sharing economy worker of today into yesterday’s notion of “employees” and “contractors” is akin to “be[ing] handed a square peg and asked to choose between two round holes.” In the same passage, that court observed that “[t]he test the California courts have developed over the 20th Century for classifying workers isn’t very helpful in addressing this 21st Century problem.”

Indeed.

The claims of the plaintiffs in the Uber class action notwithstanding, there is nothing inherently “employee”-like about an Uber driver, and there are plenty of opportunities for sharing economy workers to not be quite so “contractor”-like either.  What we really need is some creative thinking, and an application of legal principles (as opposed to tired categories) to the new reality of the 21st century in order to come up with a third way (and maybe a fourth and fifth way, as well…) of regulating labor relationships. If we must have classes, consider it the entrepreneurial class.

Uber’s business model is a great example of how an employee definition doesn’t quite make sense. The party that contracts with Uber might not even be an individual, but a corporation that, even without Uber’s platform, would be providing private ride services. Particularly with UberBlack, private companies use Uber’s lead generation platform merely to supplement their own marketing efforts. Obviously converting these companies and their own employees into “employees” of Uber is ludicrous.

However, even for the more common example that many people will first think of — the guy down the street with a car and some time on his hands — sticking him into the employee category may or may not make sense. First, as an employee he will be handed a whole raft of potential benefits that have corresponding obligations for Uber. Those obligations — like disability, health benefits, time off, etc — will come at a cost, which will typically mean less money earned for that sometimes-driver as those costs are passed on in the form of either increased prices (and a reduction in ridership) or reduced wages. For many people, this will decrease their marginal earnings to the point where it won’t make sense for them to drive anymore.

Second, for many people it may lead to an outright conflict that either prevents them from being a driver, or else locks them into a single platform, thus harming competition in the marketplace. A driver who is Uber’s “employee” may be in violation of her duties of loyalty to Uber if she takes rides from the Lyft platform (and multi-homing is extremely common in this space). Similarly, employers – in particular state and municipal governments – frequently have strict rules on outside employment, and a determination that driving for Uber makes you an “employee” of the company may effectively preclude drivers by virtue of their actual employer’s policies.

Further, I believe it’s notable that many employment tests in the United States are extremely multi-factor; the Borello case from CA outlines thirteen distinct considerations, for instance.  The utter complexity of fitting a worker into an “employee” classification suggests that even this old, familiar notion of what it is to be an “employee” is not quite as clear as we often presume, but is more of a “catch-all” category. The sharing-economy platforms from companies like Uber and Lyft will only exacerbate this problem — and serve to make its problematic consequences more pointed.

But even the definition of “contractor” is inapplicable to these drivers. In the case at hand, Uber was accused of treating drivers as employees because it provided suggestions about how to earn higher ratings from riders, and because it offered “on-boarding” programs that give new drivers an orientation. This general training is not a need unique to Uber, however. Consider Instacart’s recent announcement that it would re-classify some of its Boston shoppers as part-time employees. It seems clear that, in large part, the company decided to make this move for purely strategic, legal reasons. In actuality, it wanted simply to be able to guarantee that there would be some minimum level of quality for the people who provided services through its network. This might involve orientation meetings, intermittent trainings, and some minor direction on how a shopper should perform his or her work (for instance, pick produce last so that it remains fresh). There is no obvious reason why providing this sort of guidance should force a company into classifying on-demand workers as “employees,” destroying the unique and socially beneficial qualities of its offerings in the process.

The sharing economy promises to remove the transaction costs that have for quite a long time chained employees to firms. On their own, individuals simply cannot obtain enough information that would enable them to realize a fully self-defined work environment. It’s an accident of history (and technology) — of scarce resources and scarcer information — that the model of work has revolved around selling one’s services to an employer. But technology is now rendering this model inefficient compared to the alternatives — and our legal system should not get in its way. Canadian courts have begun experimenting with a third classification of worker — the “dependent” worker, a classification that may or may not work here — and so too should our courts and legislatures start thinking about a new classification. It makes no sense to drag down cutting-edge 21st century work and life models with depression-era notions of what it means to earn a living.

In its June 30 decision in United States v. Apple Inc., a three-judge Second Circuit panel departed from sound antitrust reasoning in holding that Apple’s e-book distribution agreement with various publishers was illegal per se. Judge Dennis Jacobs’ thoughtful dissent, which substantially informs the following discussion of this case, is worth a close read.

In 2009, Apple sought to enter the retail market for e-books, as it prepared to launch its first iPad tablet. Apple, however, confronted an e-book monopolist, Amazon (possessor of a 90 percent e-book market share), that was effectively excluding new entrants by offering bestsellers at a loss through its popular Kindle device ($9.99, a price below what Amazon was paying publishers for the e-book rights). In order to enter the market effectively without incurring a loss itself (by meeting Amazon’s price) or impairing its brand (by charging more than Amazon), Apple approached publishers that dealt with Amazon and offered itself as a competing e-book buyer, subject to the publishers agreeing to a new distribution model that would lower barriers to entry into retail e-book sales. The new publishing model was implemented by three sets of contract terms Apple asked the publishers to accept – agency pricing, tiered price caps, and a most-favored-nation (MFN) clause. (I refer the reader to the full panel majority opinion for a detailed discussion of these clauses.) None of those terms, standing alone, is illegal. Although the publishers were unhappy about Amazon’s below-cost pricing for e-books, no one publisher alone could counter Amazon. Five of the six largest U.S. publishers (Hachette, HarperCollins, Macmillan, Penguin, and Simon & Schuster) agreed to Apple’s terms and jointly convinced Amazon to adopt agency pricing. Apple also encouraged other publishers to implement agency pricing in their contracts with other retailers. The barrier to entry thus removed, Apple entered the retail market as a formidable competitor. Amazon’s retail e-book market share fell, and today stands at 60 percent.

The U.S. Department of Justice (DOJ) and 31 states sued Apple and the five publishers for conspiring in unreasonable restraint of trade under Sherman Act § 1. The publishers settled (signing consent decrees which prohibited them for a period from restricting e-book retailers’ ability to set prices), but Apple proceeded to a bench trial. A federal district court held that Apple’s conduct as a vertical enabler of a horizontal price conspiracy among the publishers was a per se violation of § 1, and that (in any event) Apple’s conduct would also violate § 1 under the antitrust rule of reason.   A majority of the Second Circuit panel affirmed on the ground of per se liability, without having to reach the rule of reason question.

Judge Jacobs’ dissent argued that Apple’s conduct was not per se illegal and also passed muster under the rule of reason. He pointed to three major errors in the majority’s opinion. First, the holding that a vertical enabler of horizontal price fixing commits a per se violation of the antitrust laws conflicts with the Supreme Court’s teaching (in overturning the per se prohibition on resale price maintenance) that a vertical agreement designed to facilitate a horizontal cartel “would need to be held unlawful under the rule of reason.” Leegin Creative Leather Prods, Inc. v. PSKS, Inc. 551 U.S. 877, 893 (2007) (emphasis added).  Second, the district court failed to recognize that Apple’s role as a vertical player differentiated it from the publishers – it should have considered Apple as a competitor on the distinct horizontal plane of retailers, where Apple competed with Amazon (and with smaller players such as Barnes & Noble). Third, assessed under the rule of reason, Apple’s conduct was “overwhelmingly” procompetitive; Apple was a major potential competitor in a market dominated by a 90 percent monopolist, and was “justifiably unwilling” to enter a market on terms that would assure a loss on sales or exact a toll on its reputation.

Judge Jacobs’ analysis is on point. The Supreme Court’s wise reluctance to condemn any purely vertical contractual restraint under the per se rule reflects a sound understanding that vertical restraints have almost always been found to be procompetitive or competitively neutral. Indeed, vertical agreements that are designed to facilitate entry into an important market dominated by one firm, such as the ones at issue in the Apple case, are especially bad candidates for summary condemnation. Thus, the majority’s decision to apply the per se rule to Apple’s contracts appears particularly out of touch with both scholarship and marketplace realities.

More generally, as Professor Herbert Hovenkamp (the author of the leading antitrust treatise) and other scholars have emphasized, well-grounded antitrust analysis involves a certain amount of preliminary evaluation of a restraint seen in its relevant factual context, before a “per se” or “rule of reason” label is applied. (In the case of truly “naked” secret hard core cartels, which DOJ prosecutes under criminal law, the per se label may be applied immediately.) The Apple panel majority botched this analytic step, failing even to consider that Apple’s restraints could enhance retail competition with Amazon.

The panel majority also appeared overly fixated on the fact that some near-term e-book retail prices rose above Amazon’s previous below-cost levels in the wake of Apple’s contracts, without noting the longer-term positive implications for the competitive process of new e-book entry. Below-cost prices are not a feature of durable efficient competition, and in this case may well have been a temporary measure aimed at discouraging entry. In any event, what counts in measuring consumer welfare is not short-term price, but whether expanded output is being promoted by a business arrangement – a key factor that the majority notably failed to address. (It appears highly probable that the fall in Amazon’s e-book retail market share, and the invigoration of e-book competition, have generated output and welfare levels higher than those that would have prevailed had Amazon maintained its monopoly. This is bolstered by Apple’s showing, which the majority does not deny, that in the two years following the “conspiracy” among Apple and the publishers, prices across the e-book market as a whole fell slightly and total output increased.)

Finally, Judge Jacobs’ dissent provides strong arguments in favor of upholding Apple’s conduct under the rule of reason. As the dissent stresses, removal of barriers to entry that shield a monopolist, as in this case, is in line with the procompetitive goals of antitrust law. Another procompetitive effect is the encouragement of innovation (manifested by the enablement of e-book reading with the cutting-edge functions of the iPad), a hallmark and benefit of competition. Another benefit was that the elimination of below-cost pricing helped raise authors’ royalties. Furthermore, in the words of the dissent, any welfare reductions due to Apple’s vertical restrictions are “no more than a slight offset to the competitive benefits that now pervade the relevant market.” (Admittedly that comment is a speculative observation, but in my view very likely a well-founded one.) Finally, as the dissent points out, the district court’s findings demonstrate that Apple could not have entered and competed effectively using other strategies, such as wholesale contracts involving below-cost pricing (like Amazon’s) or higher prices. Summing things up, the dissent explains that “Apple took steps to compete with a monopolist and open the market to more entrants, generating only minor competitive restraints in the process. Its conduct was eminently reasonable; no one has suggested a viable alternative.” In closing, even if one believes a fuller application of the rule of reason is called for before reaching the dissent’s conclusion, the dissent does a good job in highlighting the key considerations at play here – considerations that the majority utterly failed to address.

In sum, the Second Circuit panel majority wore jurisprudential blinders in its Apple decision. Like the mesmerized audience at a magic show, it focused in blinkered fashion on a magician’s sleight of hand (the one-dimensional characterization of certain uniform contractual terms), while not paying attention to what was really going on (the impressive welfare-enhancing invigoration of competition in e-book retailing). In other words, the majority decision showed a naïve preference for quick and superficial characterizations of conduct at the expense of a nuanced assessment of the broader competitive context. Perhaps the Second Circuit en banc will have the opportunity to correct the panel’s erroneous understanding of per se and rule of reason analysis. Even better, the Supreme Court may wish to step in to ensure that its thoughtful development of antitrust doctrine in recent years – focused on actual effects and economic efficiency, not on superficial condemnatory labels that ignore marketplace benefits – not be undermined.

The most welfare-inimical restrictions on competition stem from governmental action, and the Organization for Economic Cooperation and Development’s newly promulgated “Competition Assessment Toolkit, Volume 3: Operational Manual” (“Toolkit 3,” approved by the OECD in late June 2015) provides useful additional guidance on how to evaluate and tackle such harmful market distortions.  Toolkit 3 is a very helpful supplement to the first and second volumes of the Competition Assessment Toolkit.  Commendably, Toolkit 3 presents itself as a tool that can be employed by well-intentioned governments generally, rather than merely as a manual for advocacy by national competition agencies (which may lack the political clout to sell reforms to other government bureaucracies or to legislators).  It is a succinct document that is not highly technical, so it can be used by a wide range of governments and applied flexibly in light of their resource constraints and institutional capacities.  Let’s briefly survey Toolkit 3’s key provisions.

Toolkit 3 begins with a “competition checklist” that states that a competition assessment should be undertaken if a regulatory or legislative proposal has any one of four effects: (1) it limits the number or range of suppliers; (2) it limits the ability of suppliers to compete; (3) it reduces the incentive of suppliers to compete; or (4) it limits the choices and information available to consumers. The Toolkit then sets forth basic guidance on competition assessments in eight relatively short, clearly written chapters.

Chapter one begins by explaining that Toolkit 3 “shows how to assess laws, regulations, and policies for their competition effects, and how to revise regulations or policies to make them more procompetitive.” To that end, the chapter introduces the concept of market studies and sectoral reviews, and outlines a six-part process for carrying out competition assessments: (1) identify policies to assess; (2) apply the competition checklist (see above); (3) identify alternative options for achieving a policy objective; (4) select the best option; (5) implement the best option; and (6) review the impacts of an option once it has been implemented.

Chapter two provides general guidance on the selection of public policies for examination, with particular attention to the identification of sectors of the economy that have the greatest restraints on competition and a major impact on economic output and efficiency.

Chapter three focuses on competition screening through use of threshold questions embodied in the four-part competition checklist. It also provides examples of the sorts of regulations that fall into each category covered by the checklist.

Chapter four sets forth guidance for the examination of potential restrictions that have been flagged for evaluation by the checklist. It provides indicators for deciding whether or not “in-depth analysis” is required, delineates specific considerations that should be brought to bear in conducting an analysis, and provides a detailed example of the steps to be followed in assessing a hypothetical drug patent law (beginning with a preliminary assessment, followed by a detailed analysis, and ending with key findings).

Chapter five centers on identifying the policy options that allow a policymaker to achieve a desired objective with a minimum distortion of competition. It discusses: (1) identifying the purpose of a policy; (2) identifying the competition problems caused by the policy under examination and whether it is necessary to achieve the desired objective; (3) evaluating the technical features of the subject matter being regulated; (4) accounting for features of the broader regulatory environment that have an effect on the market in question, in order to develop alternatives; (5) understanding changes in the business or market environment that have occurred since the last policy implementation; and (6) identifying specific techniques that allow an objective to be achieved with a minimum distortion of competition. The chapter closes by briefly describing various policy approaches for achieving a hypothetical desired reform objective (promotion of generic drug competition).

Chapter six provides guidance on comparing the policy options that have been identified. After summarizing background concepts, it discusses qualitative analysis, quantitative analysis, and the measurement of costs and benefits. The cost-benefit section is particularly thorough, delving into data gathering, techniques of measurement, estimates of values, adjustments to values, and accounting for risk and uncertainty. These tools are then applied to a specific hypothetical involving pharmaceutical regulation, featuring an assessment of the advantages and disadvantages of alternative options.
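To give a flavor of the kind of calculation involved, the following is a minimal illustrative sketch, not drawn from Toolkit 3 itself: it simply ranks hypothetical policy options by the present value of their annual net benefits under an assumed discount rate. The option names, figures, and discount rate are invented for illustration.

```python
# Illustrative only: ranking hypothetical policy options by the present value
# of their annual net benefits (benefits minus costs). Toolkit 3 prescribes no
# particular formula or software; all names, figures, and the discount rate
# below are assumptions made for this sketch.

def present_value(net_benefits, discount_rate):
    """Discount a stream of annual net benefits to today's value."""
    return sum(nb / (1 + discount_rate) ** year for year, nb in enumerate(net_benefits))

# Hypothetical options: annual net benefits (in millions) over a five-year horizon.
options = {
    "status quo":          [0, 0, 0, 0, 0],
    "full liberalization": [-10, 5, 15, 20, 20],  # up-front transition costs, larger later gains
    "partial reform":      [-4, 2, 6, 8, 8],
}

DISCOUNT_RATE = 0.05  # assumed social discount rate

ranked = sorted(options.items(),
                key=lambda item: present_value(item[1], DISCOUNT_RATE),
                reverse=True)

for name, stream in ranked:
    print(f"{name}: present value of net benefits = {present_value(stream, DISCOUNT_RATE):.1f}m")
```

In a real assessment these inputs would come from the data-gathering and valuation steps the chapter describes, and the ranking would be stress-tested for risk and uncertainty rather than taken at face value.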

Chapter seven outlines the steps that should be taken in submitting a recommendation for government action: (1) selecting the best policy option; (2) presenting the recommendation to a decision-maker; (3) drafting any regulation needed to effectuate the desired policy option; (4) obtaining final approval; and (5) implementing the regulation. The chapter closes by applying this framework to hypothetical regulations.

Chapter eight discusses the performance of ex post evaluations of competition assessments, in order to determine whether the option chosen following the review process had the anticipated effects and was the most appropriate choice. Four examples of ex post evaluations are summarized.

Toolkit 3 closes with a brief annex that describes mathematically and graphically the consumer benefits that arise when moving from a restrictive market equilibrium to a competitive equilibrium.
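For readers who want the arithmetic behind that annex, the standard textbook decomposition, sketched here from first principles rather than reproduced from Toolkit 3, runs as follows: with a downward-sloping (locally linear) demand curve and constant marginal cost equal to the competitive price, moving from a restricted outcome with price p_r and quantity q_r to the competitive outcome with lower price p_c and larger quantity q_c raises consumer surplus by

\[
\Delta CS \;=\; \underbrace{(p_r - p_c)\, q_r}_{\text{savings on units already purchased}} \;+\; \underbrace{\tfrac{1}{2}\,(p_r - p_c)\,(q_c - q_r)}_{\text{surplus on newly induced purchases}},
\]

where the second term is the familiar deadweight-loss triangle that the restriction had destroyed.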

In sum, the release of Toolkit 3 is best seen as one more small step forward in the long-term fight against state-managed regulatory capitalism and cronyism, on a par with increased attention to advocacy initiatives within the International Competition Network and growing World Bank efforts to highlight the welfare harm due to governmental regulatory impediments. Although anticompetitive government market distortions will remain a huge problem for the foreseeable future, at least international organizations are starting to acknowledge their severity and to provide conceptual tools for combatting them. Now it is up to free market proponents to work in the trenches to secure the political changes needed to bring such distortions – and their rent-seeking advocates – to heel. This is a long-term fight, but well worth the candle.

Today, in Michigan v. EPA, a five-Justice Supreme Court majority (Justice Antonin Scalia, joined by Chief Justice John Roberts and Justices Anthony Kennedy, Clarence Thomas, and Samuel Alito, with Justice Thomas issuing a separate concurrence) held that the Clean Air Act requires the Environmental Protection Agency (EPA) to consider costs, including the cost of compliance, when deciding whether to regulate hazardous air pollutants emitted by power plants.  The Clean Air Act, 42 U.S.C. §7412, authorizes the EPA to regulate emissions of hazardous air pollutants from certain stationary sources, such as refineries and factories.  The EPA may, however, regulate power plants under this program only if it concludes that such regulation is “appropriate and necessary” after studying hazards to public health posed by power-plant emissions, 42 U.S.C. §7412(n)(1)(A).

The EPA determined that it was “appropriate and necessary” to regulate oil- and coal-fired power plants, because the plants’ emissions pose risks to public health and the environment and because controls capable of reducing these emissions were available.  (The EPA contended that its regulations would have ancillary benefits, including cutting power plants’ emissions of particulate matter and sulfur dioxide, not covered by the hazardous air pollutants program, but conceded that its estimate of benefits “played no role” in its finding that regulation was “appropriate and necessary.”)  The EPA refused to consider costs when deciding to regulate, even though it estimated that the cost of its regulations to power plants would be $9.6 billion a year, while the quantifiable benefits from the resulting reduction in hazardous-air-pollutant emissions would be only $4 to $6 million a year.  Twenty-three states challenged the EPA’s refusal to consider cost, but the U.S. Court of Appeals for the D.C. Circuit upheld the agency’s decision not to consider costs at the outset.  In reversing the D.C. Circuit, the Court stressed that the EPA strayed well beyond the bounds of reasonable interpretation in concluding that cost is not a factor relevant to the appropriateness of regulating power plants.  Read naturally against the backdrop of established administrative law, the phrase “appropriate and necessary” plainly encompasses cost, according to the Court.

In a concurring opinion, Justice Thomas opined that this case “raises serious questions about the constitutionality of our broader practice of deferring to agency interpretations of federal statutes.”  Justice Elena Kagan, joined by Justices Ruth Bader Ginsburg, Stephen Breyer, and Sonia Sotomayor, dissented, reasoning that EPA “acted well within its authority in declining to consider costs at the [beginning] . . . of the regulatory process given that it would do so in every round thereafter.”

Although the Supreme Court’s holding merits praise, it is inherently limited in scope and should not be expected to significantly constrain regulatory overreach, whether by the EPA or by other agencies.  First, in remanding the case, the Court did not opine on the precise manner in which costs and benefits should be evaluated, potentially leaving the EPA broad latitude to try to reach its desired regulatory result with a bit of “cost-benefit” wordsmithing.  Such a result would not be surprising, given that “[t]he U.S. Government has a strong tendency to overregulate.”  More specifically, administrative agencies such as the EPA, whose staffs are dominated by regulation-minded permanent bureaucrats, will have every incentive to skew judicially required “cost assessments” to justify their actions – based on, for example, “false assumptions and linkages, black-box computer models, secretive collusion with activist groups, outright deception, and supposedly ‘scientific’ reports whose shady data and methodologies the agency refuses to share with industries, citizens or even Congress.”  Since, as a practical matter, appellate courts have neither the resources nor the capacity to sort out legitimate from illegitimate agency claims that regulatory programs truly meet cost-benefit standards, it would be naïve to believe that the Court’s majority opinion will do much to rein in the federal regulatory behemoth.

What, then, is the solution?  The concern that federal administrative agencies are being allowed to arrogate to themselves inherently executive and judicial functions, a theme previously stressed by Justice Thomas, has not led other justices to call for wide-scale judicial nullification or limitation of expansive agency regulatory findings.  Absent an unexpected Executive Branch epiphany, then, the best bet for reform lies primarily in congressional action.

What sort of congressional action?  The Heritage Foundation has described actions needed to help stem the tide of overregulation:  (1) require congressional approval of new major regulations promulgated by agencies; (2) establish a sunset date for federal regulations; (3) subject “independent” agencies to executive branch regulatory review; and (4) develop a congressional regulatory analysis capability.  Legislative proposals such as the REINS Act (Regulations from the Executive in Need of Scrutiny Act of 2015) would meet the first objective, while other discrete measures could advance the other three goals.  Public choice considerations suggest that these reforms will not be easily achieved (beneficiaries of the intrusive regulatory status quo may be expected to vigorously oppose reform), but they nevertheless should be pursued posthaste.

I am of two minds when it comes to the announcement today that the NYC taxi commission will permit companies like Uber and Lyft to update the mobile apps that serve as the front end for their ridesharing platforms whenever the companies wish.

My first instinct is to breathe a sigh of relief that even the NYC taxi commission eventually rejected the patently ridiculous notion that an international technology platform should have its update schedule in any way dictated by the parochial interests of a local transportation fiefdom.

My second instinct is to grit my teeth in frustration that, in the face of the overwhelming transformation going on in the world today because of technology platforms offered by the likes of Uber and Lyft, anyone would even think to ask the question “should I ask the NYC taxi commission whether or not I can update the app on my users’ smartphones?”

That said, it’s important to take the world as you find it, not as you wish it to be, and so I want to highlight some items from the decision that deserve approbation.

Meera Joshi, the NYC Taxi Commission’s chairperson and CEO, had this to say about the proposed rule:

We re-stylized the rules so they’re tech agnostic because our point is not to go after one particular technology – things change quicker than we do – it’s to provide baseline consumer protection and driver safety requirements[.]

I love that the commission gets this. The real power in the technology that drives the sharing economy is that it can change quickly in response to consumer demand. Further, regulators can offer value to these markets only when they understand that the nature of work and services is changing, and that their core justification as consumer protection agencies necessarily requires them to adjust when and how they intervene.

Although there is always more work to be done to make room for these entrepreneurial platforms (for instance, the NYC rules appear to require that all on-demand drivers – including the soccer mom down the street driving for Lyft – be licensed through the commission), this is generally forward-thinking. I hope that more municipalities across the country take notice, and that the relevant regulators follow suit in repositioning themselves as partners with these innovative companies.