In a recent post at the (appallingly misnamed) ProMarket blog (the blog of the Stigler Center at the University of Chicago Booth School of Business — George Stigler is rolling in his grave…), Marshall Steinbaum keeps alive the hipster-antitrust assertion that lax antitrust enforcement — this time in the labor market — is to blame for… well, most? all? of what’s wrong with “the labor market and the broader macroeconomic conditions” in the country.

In this entry, Steinbaum takes particular aim at the US enforcement agencies, which he claims do not consider monopsony power in merger review (and other antitrust enforcement actions) because their current consumer welfare framework somehow doesn’t recognize monopsony as a possible harm.

This will probably come as news to the agencies themselves, whose Horizontal Merger Guidelines (HMGs) devote an entire (albeit brief) section (section 12) to monopsony, noting that:

Mergers of competing buyers can enhance market power on the buying side of the market, just as mergers of competing sellers can enhance market power on the selling side of the market. Buyer market power is sometimes called “monopsony power.”

* * *

Market power on the buying side of the market is not a significant concern if suppliers have numerous attractive outlets for their goods or services. However, when that is not the case, the Agencies may conclude that the merger of competing buyers is likely to lessen competition in a manner harmful to sellers.

Steinbaum fails to mention the HMGs, but he does point to a US submission to the OECD to make his case. In that document, the agencies state that

The U.S. Federal Trade Commission (“FTC”) and the Antitrust Division of the Department of Justice (“DOJ”) [] do not consider employment or other non-competition factors in their antitrust analysis. The antitrust agencies have learned that, while such considerations “may be appropriate policy objectives and worthy goals overall… integrating their consideration into a competition analysis… can lead to poor outcomes to the detriment of both businesses and consumers.” Instead, the antitrust agencies focus on ensuring robust competition that benefits consumers and leave other policies such as employment to other parts of government that may be specifically charged with or better placed to consider such objectives.

Steinbaum, of course, cites only the first sentence. And he uses it as a jumping-off point to attack the notion that antitrust is an improper tool for labor market regulation. But if he had just read a little bit further in the (very short) document he cites, Steinbaum might have discovered that the US antitrust agencies have, in fact, challenged the exercise of collusive monopsony power in labor markets. As footnote 19 of the OECD submission notes:

Although employment is not a relevant policy goal in antitrust analysis, anticompetitive conduct affecting terms of employment can violate the Sherman Act. See, e.g., DOJ settlement with eBay Inc. that prevents the company from entering into or maintaining agreements with other companies that restrain employee recruiting or hiring; FTC settlement with ski equipment manufacturers settling charges that companies illegally agreed not to compete for one another’s ski endorsers or employees. (Emphasis added).

And, ironically, while asserting that labor market collusion doesn’t matter to the agencies, Steinbaum himself points to “the Justice Department’s 2010 lawsuit against Silicon Valley employers for colluding not to hire one another’s programmers.”

Steinbaum instead opts for a willful misreading of the first sentence of the OECD submission. But what the OECD document refers to, of course, are situations where two firms merge, no market power is created (either in input or output markets), but people are laid off because the merged firm does not need all of, say, the IT and human resources employees previously employed in the pre-merger world.

Does Steinbaum really think this is grounds for challenging the merger on antitrust grounds?

Actually, his post suggests that he does indeed think so, although he doesn’t come right out and say it. What he does say — as he must in order to bring antitrust enforcement to bear on the low- and unskilled labor markets (e.g., burger flippers; retail cashiers; Uber drivers) he purports to care most about — is that:

Employers can have that control [over employees, as opposed to independent contractors] without first establishing themselves as a monopoly—in fact, reclassification [of workers as independent contractors] is increasingly standard operating procedure in many industries, which means that treating it as a violation of Section 2 of the Sherman Act should not require that outright monopolization must first be shown. (Emphasis added).

Honestly, I don’t have any idea what he means. Somehow, because firms hire independent contractors where at one time long ago they might have hired employees… they engage in Sherman Act violations, even if they don’t have market power? Huh?

I get why he needs to try to make this move: As I intimated above, there is probably not a single firm in the world that hires low- or unskilled workers that has anything approaching monopsony power in those labor markets. Even Uber, the example he uses, has nothing like monopsony power, unless perhaps you define the market (completely improperly) as “drivers already working for Uber.” Even then Uber doesn’t have monopsony power: There can be no (or, at best, virtually no) markets in the world where an Uber driver has no other potential employment opportunities but working for Uber.

Moreover, how on earth is hiring independent contractors evidence of anticompetitive behavior? “Reclassification” is not, in fact, “standard operating procedure.” It is the case that firms in many industries often (unilaterally) decide to contract out to specialized firms the hiring of low- and unskilled workers over whom they do not need to exercise direct oversight, thus not employing those workers directly. That isn’t “reclassification” of existing workers who have no choice but to accept their employer’s terms; it’s a long-term evolution of the economy toward specialization, enabled in part by technology.

And if we’re really concerned about what “employee” and “independent contractor” mean for workers and employment regulation, we should reconsider those outdated categories. Firms are faced with a binary choice: hire employees or engage independent contractors. Neither category fits many of today’s employment arrangements very well, but that’s the choice firms are given. That they sometimes choose “independent contractor” over “employee” is hardly evidence of anticompetitive conduct meriting antitrust enforcement.

The point is: The notion that any of this is evidence of monopsony power, or that the antitrust enforcement agencies don’t care about monopsony power — because, Bork! — is absurd.

Even more absurd is the notion that the antitrust laws should be used to effect Steinbaum’s preferred market regulations — independent of proof of actual anticompetitive effect. I get that it’s hard to convince Congress to pass the precise laws you want all the time. But simply routing around Congress and using the antitrust statutes as a sort of meta-legislation to enact whatever happens to be Marshall Steinbaum’s preferred regulation du jour is ridiculous.

Which is a point the OECD submission made (again, if only Steinbaum had read beyond the first sentence…):

[T]wo difficulties with expanding the scope of antitrust analysis to include employment concerns warrant discussion. First, a full accounting of employment effects would require consideration of short-term effects, such as likely layoffs by the merged firm, but also long-term effects, which could include employment gains elsewhere in the industry or in the economy arising from efficiencies generated by the merger. Measuring these effects would [be extremely difficult]. Second, unless a clear policy spelling out how the antitrust agency would assess the appropriate weight to give employment effects in relation to the proposed conduct or transaction’s procompetitive and anticompetitive effects could be developed, [such enforcement would be deeply problematic, and essentially arbitrary].

To be sure, the agencies don’t sufficiently recognize that they already face the problem of reconciling multidimensional effects — e.g., short-, medium-, and long-term price effects, innovation effects, product quality effects, etc. But there is no reason to exacerbate the problem by asking them to also consider employment effects. Especially not in Steinbaum’s world, in which certain employment effects are problematic even without evidence of market power or actual anticompetitive harm, just because he says so.

Consider how this might play out:

Suppose that Pepsi, Coca-Cola, Dr. Pepper… and every other soft drink company in the world attempted to merge, creating a monopoly soft drink manufacturer. In what possible employment market would even this merger create a monopsony in which anticompetitive harm could be tied to the merger? In the market for “people who know soft drink secret formulas?” Yet Steinbaum would have the Sherman Act enforced against such a merger not because it might create a product market monopoly, but because the existence of a product market monopoly means the firm must be able to do bad things in other markets, as well. For Steinbaum and all the other scolds who see concentration as the source of all evil, the dearth of evidence to support such a claim is no barrier (on which, see, e.g., this recent, content-less NYT article (that, naturally, quotes Steinbaum) on how “big business may be to blame” for the slowing rate of startups).

The point is, monopoly power in a product market does not necessarily have any relationship to monopsony power in the labor market. Simply asserting that it does — and lambasting the enforcement agencies for not just accepting that assertion — is farcical.

The real question, however, is what has happened to the University of Chicago that it continues to provide a platform for such nonsense?

Last Friday, drug maker Allergan and the Saint Regis Mohawk Tribe announced that they had reached an agreement under which Allergan assigned the patents on its top-selling drug Restasis to the tribe and, in return, Allergan was given the exclusive license on the Restasis patents so that it can continue producing and distributing the drug.  Allergan agreed to pay $13.75 million to the tribe for the deal, and up to $15 million annually in royalties as long as the patents remain valid.

Why would a large drug maker assign the patents on a leading drug to a sovereign Indian nation?  This unorthodox agreement may actually be a brilliant strategy that enables patent owners to avoid the unbalanced inter partes review (IPR) process.  The validity of the Restasis patents is currently being challenged both in IPR proceedings before the Patent Trial and Appeal Board (PTAB) and in federal district court in Texas.  However, the Allergan-Mohawk deal may lead to the dismissal of the IPR proceedings as, under the terms of the deal, the Mohawks will file a motion to dismiss the IPR proceedings based on the tribe’s sovereign immunity.  Earlier this year, in Covidien v. University of Florida Research Foundation, the PTAB determined that sovereign immunity shields state universities holding patents from IPR proceedings, and the same reasoning should certainly apply to sovereign Indian nations.

I’ve previously published an article explaining why pharmaceutical companies have legitimate reasons to avoid IPR proceedings: critical differences between district court litigation and IPR proceedings jeopardize the delicate balance Hatch-Waxman sought to achieve between patent owners and patent challengers. In addition to forcing patent owners into duplicative litigation in district courts and the PTAB, depriving them of the ability to achieve finality in one proceeding, the PTAB also applies a lower standard of proof for invalidity than do district courts in Hatch-Waxman litigation. It is also easier to meet the standard of proof in a PTAB trial because of a more lenient claim construction standard. Moreover, on appeal, PTAB decisions in IPR proceedings are given more deference than district court decisions. Finally, while patent challengers in district court must establish sufficient Article III standing, IPR proceedings have no standing requirement. This has led to the exploitation of the IPR process by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet.

The differences between district court litigation and IPR proceedings have created a significant deviation in patent invalidation rates under the two pathways; compared to district court challenges, patents are twice as likely to be found invalid in IPR challenges. Although the U.S. Supreme Court in Cuozzo Speed Technologies v. Lee acknowledged that the anti-patentee claim construction standard in IPR “increases the possibility that the examiner will find the claim too broad (and deny it),” it concluded that only Congress could mandate a different standard. So far, Congress has done nothing to reduce the disparities between IPR proceedings and Hatch-Waxman litigation. But, while we wait, the high patent invalidation rate in IPR proceedings creates significant uncertainty for patent owners’ intellectual property rights. Uncertain patent rights, in turn, lead to less innovation in the pharmaceutical industry. Put simply, drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they can’t be certain that the patents for that drug can withstand IPR proceedings that are clearly stacked against them (for an excellent discussion of how the PTAB threatens innovation, see Alden Abbott’s recent TOTM post). Thus, deals between brand companies and sovereigns, such as Indian nations, that insulate patents from IPR proceedings should improve the certainty around intellectual property rights and protect drug innovation.

Yet, the response to the Allergan-Mohawk deal among some scholars and generic drug companies has been one of panic and speculative doom.  Critics have questioned the deal largely on the grounds that, in addition to insulating Restasis from IPR proceedings, tribal sovereignty might also shield the patents in standard Hatch-Waxman district court litigation.  If this were true and brand companies began to routinely house their patents with sovereign Indian nations, then the venues in which generic companies could challenge patents would be restricted and generic companies would have less incentive to produce and market cheaper drugs.

However, it is far from clear that these deals could shield patents in standard Hatch-Waxman district court litigation. Hatch-Waxman litigation typically follows a familiar pattern: a generic company files a Paragraph IV ANDA alleging the patent owner’s patents are invalid or will not be infringed, the patent owner then sues the generic for infringement, and the generic company then files a counterclaim for invalidity. Critics of the Allergan-Mohawk deal allege that tribal sovereignty could insulate patent owners from the counterclaim. However, courts have held that state universities waive sovereign immunity for counterclaims when they file the initial patent infringement suit. Although, in non-infringement contexts, tribes have been found not to waive sovereign immunity for counterclaims merely by filing an action as a plaintiff, this has never been tested in patent litigation. Moreover, even if sovereign immunity could be used to prevent the counterclaim, invalidity can still be raised as an affirmative defense in the patent owner’s infringement suit (although it has been asserted that requiring generics to assert invalidity as an affirmative defense instead of a counterclaim may still tilt the playing field toward patent owners). Finally, many patent owners that are sovereigns may choose to voluntarily waive sovereign immunity to head off any criticism or congressional meddling. Given the uncertainty of the effects of tribal sovereignty in Hatch-Waxman litigation, Allergan has concluded that its deal with the Mohawks won’t affect the pending district court litigation involving the validity of the Restasis patents. However, if tribes in future cases were to cloud the viability of Hatch-Waxman by asserting sovereign immunity in district court litigation, Congress could always respond by altering the Hatch-Waxman rules to preclude this.

For now, we should all take a deep breath and put the fearmongering on hold. Whether deals like the Allergan-Mohawk arrangement could affect Hatch-Waxman litigation is simply a matter of speculation, and there are many reasons to believe that they won’t. In the meantime, the deal between Allergan and the Saint Regis Mohawk Tribe is an ingenious strategy to avoid the unbalanced IPR process. This move is the natural extension of the PTAB’s ruling on state university sovereign immunity, and state universities are likely incorporating the advantage into their own licensing and litigation strategies. The Supreme Court will soon hear a case questioning the constitutionality of the IPR process. Until the courts or Congress act to reduce the disparities between IPR proceedings and Hatch-Waxman litigation, we can hardly blame patent owners for taking clever legal steps to avoid the unbalanced IPR process.

On August 14, the Federalist Society’s Regulatory Transparency Project released a report detailing the harm imposed on innovation and property rights by the Patent Trial and Appeal Board, a Patent and Trademark Office patent review tribunal created by the infelicitously named “America Invents Act” of 2011. As the report’s abstract explains:

Patents are property rights secured to inventors of new products or services, such as the software and other high-tech innovations in our laptops and smart phones, the life-saving medicines prescribed by our doctors, and the new mechanical designs that make batteries more efficient and airplane engines more powerful. Many Americans first learn in school about the great inventors who revolutionized our lives with their patented innovations, such as Thomas Edison (the light bulb and record player), Alexander Graham Bell (the telephone), Nikola Tesla (electrical systems), the Wright brothers (airplanes), Charles Goodyear (cured rubber), Enrico Fermi (nuclear power), and Samuel Morse (the telegraph). These inventors and tens of thousands of others had the fruits of their inventive labors secured to them by patents, and these vital property rights have driven America’s innovation economy for over 225 years. For this reason, the United States has long been viewed as having the “gold standard” patent system throughout the world.

In 2011, Congress passed a new law, called the America Invents Act (AIA), that made significant changes to the U.S. patent system. Among its many changes, the AIA created a new administrative tribunal for invalidating “bad patents” (patents mistakenly issued because the claimed inventions were not actually new or because they suffer from other defects that create problems for companies in the innovation economy). This administrative tribunal is called the Patent Trial & Appeal Board (PTAB). The PTAB is composed of “administrative patent judges” appointed by the Director of the United States Patent & Trademark Office (USPTO). The PTAB administrative judges are supposed to be experts in both technology and patent law. They hold administrative hearings in response to petitions that challenge patents as defective. If they agree with the challenger, they cancel the patent by declaring it “invalid.” Anyone in the world willing to pay a filing fee can file a petition to invalidate any patent.

As many people are aware, administrative agencies can become a source of costs and harms that far outweigh the harms they were created to address. This is exactly what has happened with the PTAB. This administrative tribunal has become a prime example of regulatory overreach.

Congress created the PTAB in 2011 in response to concerns about the quality of patents being granted to inventors by the USPTO. Legitimate patents promote both inventive activity and the commercial development of inventions into real-world innovation used by regular people the world over. But “bad patents” clog the intricate gears of the innovation economy, deterring real innovators and creating unnecessary costs for companies by enabling needless and wasteful litigation. The creation of the PTAB was well intended: it was supposed to remove bad patents from the innovation economy. But the PTAB has ended up imposing tremendous and unnecessary costs and creating destructive uncertainty for the innovation economy.

In its procedures and its decisions, the PTAB has become an example of an administrative tribunal run amok. It does not provide basic legal procedures to patent owners that all other property owners receive in court. When called upon to redress these concerns, the courts have instead granted the PTAB the same broad deference they have given to other administrative agencies. Thus, these problems have gone uncorrected and unchecked. Without providing basic procedural protections to all patent owners, the PTAB has gone too far with its charge of eliminating bad patents. It is now invalidating patents in a willy-nilly fashion. One example among many is that, in early 2017, the PTAB invalidated a patent on a new MRI machine because it believed this new medical device was an “abstract idea” (and thus unpatentable).

The problems in the PTAB’s operations have become so serious that a former federal appellate chief judge has referred to PTAB administrative judges as “patent death squads.” This metaphor has proven apt, even if rhetorically exaggerated. Created to remove only bad patents clogging the innovation economy, the PTAB has itself begun to clog innovation — killing large numbers of patents and casting a pall of uncertainty over every patent that might become valuable and thus a target of a PTAB petition to invalidate it.

The U.S. innovation economy has thrived because inventors know they can devote years of productive labor and resources into developing their inventions for the marketplace, secure in the knowledge that their patents provide a solid foundation for commercialization. Pharmaceutical companies depend on their patents to recoup billions of dollars in research and development of new drugs. Venture capitalists invest in startups on the basis of these vital property rights in new products and services, as viewers of Shark Tank see every week.

The PTAB now looms over all of these inventive and commercial activities, threatening to cancel a valuable patent at any moment and without rhyme or reason. In addition to the lost investments in the invalidated patents themselves, this creates uncertainty for inventors and investors, undermining the foundations of the U.S. innovation economy.

This paper explains how the PTAB has become a prime example of regulatory overreach. The PTAB administrative tribunal is creating unnecessary costs for inventors and companies, and thus it is harming the innovation economy far beyond the harm of the bad patents it was created to remedy. First, we describe the U.S. patent system and how it secures property rights in technological innovation. Second, we describe Congress’s creation of the PTAB in 2011 and the six different administrative proceedings the PTAB uses for reviewing and canceling patents. Third, we detail the various ways that the PTAB is now causing real harm, through both its procedures and its substantive decisions, and thus threatening innovation.

The PTAB has created fundamental uncertainty about the status of all patent rights in inventions. The result is that the PTAB undermines the market value of patents and frustrates the role that these property rights serve in the investment in and commercial development of the new technological products and services that make many aspects of our modern lives seem like miracles.

In June 2017, the U.S. Supreme Court agreed to review the Oil States Energy case, raising the question of whether PTAB patent review “violates the Constitution by extinguishing private property rights through a non-Article III forum without a jury.”  A Supreme Court finding of unconstitutionality would be ideal.  But in the event the Court leaves PTAB patent review intact, legislation to curb the worst excesses of PTAB – such as the bipartisan “STRONGER Patent Act of 2017” – merits serious consideration.  Stay tuned – I will have more to say in detail about potential patent law reforms, including the reining in of PTAB, in the near future.

On July 24, as part of their newly announced “Better Deal” campaign, congressional Democrats released an antitrust proposal (“Better Deal Antitrust Proposal” or BDAP) entitled “Cracking Down on Corporate Monopolies and the Abuse of Economic and Political Power.”  Unfortunately, this antitrust tract is really an “Old Deal” screed that rehashes long-discredited ideas about “bigness is badness” and “corporate abuses,” untethered from serious economic analysis.  (In spirit it echoes the proposal for a renewed emphasis on “fairness” in antitrust made by then-Acting Assistant Attorney General Renata Hesse in 2016 – a recommendation that ran counter to sound economics, as I explained in a September 2016 Truth on the Market commentary.)  Implementation of the BDAP’s recommendations would be a “worse deal” for American consumers and for American economic vitality and growth.

The BDAP’s Portrayal of the State of Antitrust Enforcement is Factually Inaccurate, and it Ignores the Real Problems of Crony Capitalism and Regulatory Overreach

The Better Deal Antitrust Proposal begins with the assertion that antitrust has failed in recent decades:

Over the past thirty years, growing corporate influence and consolidation has led to reductions in competition, choice for consumers, and bargaining power for workers.  The extensive concentration of power in the hands of a few corporations hurts wages, undermines job growth, and threatens to squeeze out small businesses, suppliers, and new, innovative competitors.  It means higher prices and less choice for the things the American people buy every day. . .  [This is because] [o]ver the last thirty years, courts and permissive regulators have allowed large companies to get larger, resulting in higher prices and limited consumer choice in daily expenses such as travel, cable, and food and beverages.  And because concentrated market power leads to concentrated political power, these companies deploy armies of lobbyists to increase their stranglehold on Washington.  A Better Deal on competition means that we will revisit our antitrust laws to ensure that the economic freedom of all Americans—consumers, workers, and small businesses—come before big corporations that are getting even bigger.

This statement’s assertions are curious (not to mention problematic) in multiple respects.

First, since Democratic administrations have held the White House for sixteen of the past thirty years, the BDAP appears to acknowledge that Democratic presidents have overseen a failed antitrust policy.

Second, the broad claim that consumers have faced higher prices and limited consumer choice with regard to their daily expenses is baseless.  Indeed, internet commerce and new business models have sharply reduced travel and entertainment costs for the bulk of American consumers, and new “high technology” products such as smartphones and electronic games have been characterized by dramatic improvements in innovation, enhanced variety, and relatively lower costs.  Cable suppliers face vibrant competition from competitive satellite providers, fiberoptic cable suppliers (the major telcos such as Verizon), and new online methods for distributing content.  Consumer price inflation has been extremely low in recent decades, compared to the high inflationary, less innovative environment of the 1960s and 1970s – decades when federal antitrust law was applied much more vigorously.  Thus, the claim that weaker antitrust has denied consumers “economic freedom” is at war with the truth.

Third, the claim that recent decades have seen the creation of “concentrated market power,” safe from antitrust challenge, ignores the fact that, over the last three decades, apolitical government antitrust officials under both Democratic and Republican administrations have applied well-accepted economic tools (wielded by the scores of Ph.D. economists in the Justice Department and Federal Trade Commission) in enforcing the antitrust laws.  Antitrust analysis has used economics to focus on inefficient business conduct that would maintain or increase market power, and large numbers of cartels have been prosecuted and questionable mergers (including a variety of major health care and communications industry mergers) have been successfully challenged.  The alleged growth of “concentrated market power,” untouched by incompetent antitrust enforcers, is a myth.  Furthermore, claims that mere corporate size and “aggregate concentration” are grounds for antitrust concern (“big is bad”) were decisively rejected by empirical economic research published in the 1970s, and are no more convincing today.  (As I pointed out in a January 2017 blog posting at this site, recent research by highly respected economists debunks a few claims that federal antitrust enforcers have been “excessively tolerant” of late in analyzing proposed mergers.)

More interesting is the BDAP’s claim that “armies of [corporate] lobbyists” manage to “increase their stranglehold on Washington.”  This is not an antitrust concern, however, but, rather, a complaint against crony capitalism and overregulation, which became an ever more serious problem under the Obama Administration.  As I explained in my October 2016 critique of the American Antitrust Institute’s September 2008 National Competition Policy Report (a Report which is very similar in tone to the BDAP), the rapid growth of excessive regulation during the Obama years has diminished competition by creating new regulatory schemes that benefit entrenched and powerful firms (such as Dodd-Frank Act banking rules that impose excessive burdens on smaller banks).  My critique emphasized that, “as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large and wealthy well-connected rent-seekers at the expense of smaller and more dynamic competitors.”  And, more generally, excessive regulatory burdens undermine the competitive process, by distorting business decisions in a manner that detracts from competition on the merits.

It follows that, if the BDAP really wanted to challenge “unfair” corporate advantages, it would seek to roll back excessive regulation (see my November 2012 article on Trump Administration competition policy).  Indeed, the Trump Administration’s regulatory reform program (which features agency-specific regulatory reform task forces) seeks to do just that.  Perhaps then the BDAP could be rewritten to focus on endorsing President Trump’s regulatory reform initiative, rather than emphasizing a meritless “big is bad” populist antitrust policy that was consigned to the enforcement dustbin decades ago.

The BDAP’s Specific Proposals Would Harm the Economy and Reduce Consumer Welfare

Unfortunately, the BDAP does more than wax nostalgic about old-time “big is bad” antitrust policy.  It affirmatively recommends policy changes that would harm the economy.

First, the BDAP would require “a broader, longer-term view and strong presumptions that market concentration can result in anticompetitive conduct.”  Specifically, it would create “new standards to limit large mergers that unfairly consolidate corporate power,” including “mergers [that] reduce wages, cut jobs, lower product quality, limit access to services, stifle innovation, or hinder the ability of small businesses and entrepreneurs to compete.”  New standards would also “explicitly consider the ways in which control of consumer data can be used to stifle competition or jeopardize consumer privacy.”

Current merger policy evaluates likely competitive effects, centered on price and quality, estimated in economically relevant markets.  These new standards, by contrast, are open-ended.  They could justify challenges based on such a wide variety of factors that they would deter direct competitors from merging, even in cases where the merged entity would prove more efficient and better able to enhance quality or innovation.  Certain less efficient competitors – say small businesses – could argue that they would be driven out of business, or that some jobs in the industry would disappear, in order to prompt government challenges.  But such challenges would tend to undermine innovation and business improvements, as well as the inevitable redistribution of assets to higher-valued uses that is a key benefit of corporate reorganizations and acquisitions.  (Merger activity might shift instead, for example, toward inefficient conglomerate acquisitions among companies in unrelated industries, of the sort incentivized by the overly strict 1960s rules that prohibited mergers among direct competitors.)  Such a change would represent a retreat from economic common sense, and would be at odds with the consensus, economically sound merger enforcement guidance that U.S. enforcers have long recommended other countries adopt.  Furthermore, questions of consumer data and privacy are more appropriately dealt with as consumer protection matters, which the Federal Trade Commission has handled successfully for years.

Second, the BDAP would require “frequent, independent [after-the-fact] reviews of mergers” and require regulators “to take corrective measures if they find abusive monopolistic conditions where previously approved [consent decree] measures fail to make good on their intended outcomes.”

While high-profile mergers subject to significant divestiture or other remedial requirements have, in appropriate circumstances, included monitoring requirements, the tone of this recommendation is to require that far more mergers be subjected to detailed and ongoing post-acquisition reviews.  The cost of such monitoring is substantial, however, and routine reliance on it (backed by the threat of additional enforcement actions based merely on changing economic conditions) could create excessive caution in the post-merger management of newly consolidated enterprises.  Indeed, potential merging parties might decide in close cases that this sort of oversight is not worth accepting, and therefore call off potentially efficient transactions that would have enhanced economic welfare.  (The reality of enforcement error costs, and the possibility of misdiagnosis of post-merger competitive conditions, is not acknowledged by the BDAP.)

Third, a newly created “competition advocate” independent of the existing federal antitrust enforcers would be empowered to publicly recommend investigations, with the enforcers required to justify publicly why they chose not to pursue a particular recommended investigation.  The advocate would ensure that antitrust enforcers are held “accountable,” assure that complaints about “market exploitation and anticompetitive conduct” are heard, and publish data on “concentration and abuses of economic power” with demographic breakdowns.

This third proposal is particularly egregious.  It is at odds with the long tradition of prosecutorial discretion enjoyed by the federal antitrust enforcers (and law enforcers in general).  It would also empower a special interest intervenor to promote the complaints of interest groups that object to efficiency-seeking business conduct, thereby undermining the careful economic and legal analysis consistently employed by the expert antitrust agencies.  The references to “concentration” and “economic power” make clear that the “advocate” would have an untrammeled ability to highlight non-economic objections to transactions raised by inefficient competitors, jealous rivals, or self-styled populists who object to excessive “bigness.”  This would strike at the heart of our competitive process, which presumes that private parties will be allowed to pursue their own goals, free from government micromanagement, absent indications of a clear and well-defined violation of law.  In sum, the “competition advocate” is better viewed as a “special interest” advocate empowered to ignore normal legal constraints and unjustifiably interfere in business transactions.  If empowered to operate freely, such an advocate (better described as an albatross) would undoubtedly chill a wide variety of business arrangements, to the detriment of consumers and economic innovation.

Finally, the BDAP refers to a variety of ills said to afflict specific named industries, in particular airlines, cable/telecom, beer, food prices, and eyeglasses.  Airlines are subject to a variety of capacity limitations (limits on landing slots and on the size and number of airports) and regulatory constraints (prohibitions on foreign entry or investment) that may affect competitive conditions, but airline mergers are closely reviewed by the Justice Department.  Cable and telecom companies face a variety of federal, state, and local regulations, and their mergers also are closely scrutinized.  The BDAP’s reference to the proposed AT&T/Time Warner merger ignores the potential efficiencies of this “vertical” arrangement involving complementary assets (see my coauthored commentary here), and resorts to unsupported claims about wrongful “discrimination” by “behemoths” – issues that in any event are examined in antitrust merger reviews.  Unsupported claims of harm to competition and consumer choice are likewise thrown out in the references to beer and agrochemical mergers, which also receive close, economically focused merger scrutiny under existing law.  Concerns raised about the price of eyeglasses ignore the role of potentially anticompetitive regulation – that is, bad government – in harming consumer welfare in this sector.  In short, the alleged competitive “problems” the BDAP raises with respect to particular industries are no more compelling than the rest of its analysis.  The Justice Department and Federal Trade Commission are hard at work applying sound economics to these sectors.  They should be left to do their jobs, and the BDAP’s industry-specific commentary (sadly, like the rest of its commentary) should be accorded no weight.

Conclusion

Congressional Democrats would be well-advised to ditch their efforts to resurrect the counterproductive antitrust policy from days of yore, and instead focus on real economic problems, such as excessive and inappropriate government regulation, as well as weak protection for U.S. intellectual property rights, here and abroad (see here, for example).  Such a change in emphasis would redound to the benefit of American consumers and producers.


My new book, How to Regulate: A Guide for Policymakers, will be published in a few weeks.  A while back, I promised a series of posts on the book’s key chapters.  I posted an overview of the book and a description of the book’s chapter on externalities.  I then got busy on another writing project (on horizontal shareholdings—more on that later) and dropped the ball.  Today, I resume my book summary with some thoughts from the book’s chapter on public goods.

With most goods, the owner can keep others from enjoying what she owns, and, if one person enjoys the good, no one else can do so.  Consider your coat or your morning cup of Starbucks.  You can prevent me from wearing your coat or drinking your coffee, and if you choose to let me wear the coat or drink the coffee, it’s not available to anyone else.

There are some amenities, though, that are “non-excludable,” meaning that the owner can’t prevent others from enjoying them, and “non-rivalrous,” meaning that one person’s consumption of them doesn’t prevent others from enjoying them as well.  National defense and local flood control systems (levees, etc.) are like this.  So are more mundane things like public art projects and fireworks displays.  Amenities that are both non-excludable and non-rivalrous are “public goods.”

[NOTE:  Amenities that are either non-excludable or non-rivalrous, but not both, are “quasi-public goods.”  Such goods include excludable but non-rivalrous “club goods” (e.g., satellite radio programming) and non-excludable but rivalrous “commons goods” (e.g., public fisheries).  The public goods chapter of How to Regulate addresses both types of quasi-public goods, but I won’t discuss them here.]
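The two-by-two taxonomy in the note can be summarized in a few lines of code (a toy illustration only; the category names and examples come directly from the text above):

```python
def classify_good(excludable: bool, rivalrous: bool) -> str:
    """Classify a good along the two dimensions discussed above."""
    if excludable and rivalrous:
        return "private good"   # e.g., a coat or a cup of coffee
    if excludable:
        return "club good"      # e.g., satellite radio programming
    if rivalrous:
        return "commons good"   # e.g., a public fishery
    return "public good"        # e.g., national defense, a levee

print(classify_good(excludable=False, rivalrous=False))  # public good
print(classify_good(excludable=True, rivalrous=False))   # club good
```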

The primary concern with public goods is that they will be underproduced.  That’s because the producer, who must bear all the cost of producing the good, cannot exclude benefit recipients who do not contribute to the good’s production and thus cannot capture many of the benefits of his productive efforts.

Suppose, for example, that a levee would cost $5 million to construct and would create $10 million of benefit by protecting 500 homeowners from expected losses of $20,000 each (i.e., the levee would eliminate a 10% chance of a big flood that would cause each homeowner a $200,000 loss).  To maximize social welfare, the levee should be built.  But no single homeowner has an incentive to build the levee.  At least 250 homeowners would need to combine their resources to make the levee project worthwhile for participants (250 * $20,000 in individual benefit = $5 million), but most homeowners would prefer to hold out and see if their neighbors will finance the levee project without their help.  The upshot is that the levee never gets built, even though its construction is value-enhancing.
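For readers who want to check the arithmetic, the numbers above work out as follows (a minimal sketch; every figure is taken directly from the example):

```python
import math

# Free-rider arithmetic for the levee example.
homeowners = 500
flood_probability = 0.10
loss_per_home = 200_000
levee_cost = 5_000_000

# Each homeowner's expected benefit from the levee: $20,000.
benefit_per_home = flood_probability * loss_per_home

# Total benefit ($10 million) exceeds cost, so the levee is socially worthwhile.
total_benefit = homeowners * benefit_per_home
assert total_benefit > levee_cost

# Minimum coalition size for contributors to break even:
# n * benefit_per_home >= levee_cost
min_contributors = math.ceil(levee_cost / benefit_per_home)
print(min_contributors)  # 250
```

The catch, of course, is that each of the 250 needed contributors would rather be among the 250 free riders, which is why the levee goes unbuilt without some coordinating mechanism.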

Economists have often jumped from the observation that public goods are susceptible to underproduction to the conclusion that the government should tax people and use the revenues to provide public goods.  Consider, for example, this passage from a law school textbook by several renowned economists:

It is apparent that public goods will not be adequately supplied by the private sector. The reason is plain: because people can’t be excluded from using public goods, they can’t be charged money for using them, so a private supplier can’t make money from providing them. … Because public goods are generally not adequately supplied by the private sector, they have to be supplied by the public sector.

[Howell E. Jackson, Louis Kaplow, Steven Shavell, W. Kip Viscusi, & David Cope, Analytical Methods for Lawyers 362-63 (2003) (emphasis added).]

That last claim seems demonstrably false.

On July 10, the Consumer Financial Protection Bureau (CFPB) announced a new rule to ban financial service providers, such as banks or credit card companies, from using mandatory arbitration clauses to deny consumers the opportunity to participate in a class action (“Arbitration Rule”).  The Arbitration Rule’s summary explains:

First, the final rule prohibits covered providers of certain consumer financial products and services from using an agreement with a consumer that provides for arbitration of any future dispute between the parties to bar the consumer from filing or participating in a class action concerning the covered consumer financial product or service. Second, the final rule requires covered providers that are involved in an arbitration pursuant to a pre-dispute arbitration agreement to submit specified arbitral records to the Bureau and also to submit specified court records. The Bureau is also adopting official interpretations to the regulation.

The Arbitration Rule’s effective date is 60 days following its publication in the Federal Register (which is imminent), and it applies to contracts entered into more than 180 days after that.

Cutting through the hyperbole that the Arbitration Rule protects consumers from “unfairness” that would deny them “their day in court,” this Rule is in fact highly anti-consumer and harmful to innovation.  As Competitive Enterprise Institute Senior Fellow John Berlau put it, in promulgating this Rule, “[t]he CFPB has disregarded vast data showing that arbitration more often compensates consumers for damages faster and grants them larger awards than do class action lawsuits. This regulation could have particularly harmful effects on FinTech innovations, such as peer-to-peer lending.”  Moreover, in a coauthored paper, Professors Jason Johnston of the University of Virginia Law School and Todd Zywicki of the Scalia Law School debunked a CFPB study that sought to justify the agency’s plans to issue the Arbitration Rule.  They concluded:

The CFPB’s [own] findings show that arbitration is relatively fair and successful at resolving a range of disputes between consumers and providers of consumer financial products, and that regulatory efforts to limit the use of arbitration will likely leave consumers worse off . . . .  Moreover, owing to flaws in the report’s design and a lack of information, the report should not be used as the basis for any legislative or regulatory proposal to limit the use of consumer arbitration.    

Unfortunately, the Arbitration Rule is just the latest of many costly regulatory outrages perpetrated by the CFPB, an unaccountable bureaucracy that offends the Constitution’s separation of powers and should be eliminated by Congress, as I explained in a 2016 Heritage Foundation report.

Legislative elimination of an agency, however, takes time.  Fortunately, in the near term, Congress can apply the Congressional Review Act (CRA) to prevent the Arbitration Rule from taking effect, and to block the CFPB from passing rules similar to it in the future.

As Heritage Senior Legal Fellow Paul Larkin has explained:

[The CRA is] Congress’s most recent effort to trim the excesses of the modern administrative state.  The act requires the executive branch to report every “rule” — a term that includes not only the regulations an agency promulgates, but also its interpretations of the agency’s governing laws — to the Senate and House of Representatives so that each chamber can schedule an up-or-down vote on the rule under the statute’s fast-track procedure.  The act was designed to enable Congress expeditiously to overturn agency regulations by avoiding the delays occasioned by the Senate’s filibuster rules and practices while also satisfying the [U.S. Constitution’s] Article I Bicameralism and Presentment requirements, which force the Congress and President to collaborate to enact, revise, or repeal a law.  Under the CRA, a joint resolution of disapproval signed into law by the President invalidates the rule and bars an agency from thereafter adopting any substantially similar rule absent a new act of Congress.

Although the CRA was almost never invoked before 2017, in recent months it has been used extensively as a tool by Congress and the Trump Administration to roll back specific manifestations of Obama Administration regulatory overreach (for example, see here and here).

Application of the CRA to expunge the Arbitration Rule (and any future variations on it) would benefit consumers, financial services innovation, and the overall economy.  Senator Tom Cotton has already gotten the ball rolling to repeal that Rule.  Let us hope that Congress follows his lead and acts promptly.

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” It’s also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is in the characterization that placement is more important than relevance in influencing user behavior, the evidence cited by the Commission to demonstrate that doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results” and glosses over the fact that the “prominent placement” of Google’s “results” is not only a difference in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different than the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than merely links to other sites. They are also more visually rich and attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.”  Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers.  In fact, fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and definitely not to shopping on the Internet) that the Commission seems to think.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) but also don’t figure in the Commission’s analysis, apparently.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors complaining that the world is evolving around them don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

I recently published a piece in the Hill welcoming the Canadian Supreme Court’s decision in Google v. Equustek. In this post I expand (at length) upon my assessment of the case.

In its decision, the Court upheld injunctive relief against Google, directing the company to stop indexing websites offering the infringing goods in question, regardless of the sites’ location (and even though Google was neither a party to the case nor in any way held liable for the infringement).  As a result, the Court’s ruling affects Google’s conduct outside of Canada as well as within it.

The case raises some fascinating and thorny issues, but, in the end, the Court navigated them admirably.

Some others, however, were not so… welcoming of the decision (see, e.g., here and here).

The primary objection to the ruling seems to be, in essence, that it is the top of a slippery slope: “If Canada can do this, what’s to stop Iran or China from doing it? Free expression as we know it on the Internet will cease to exist.”

This is a valid concern, of course — in the abstract. But for reasons I explain below, we should see this case — and, more importantly, the approach adopted by the Canadian Supreme Court — as reassuring, not foreboding.

Some quick background on the exercise of extraterritorial jurisdiction in international law

The salient facts in, and the fundamental issue raised by, the case were neatly summarized by Hugh Stephens:

[The lower Court] issued an interim injunction requiring Google to de-index or delist (i.e. not return search results for) the website of a firm (Datalink Gateways) that was marketing goods online based on the theft of trade secrets from Equustek, a Vancouver, B.C., based hi-tech firm that makes sophisticated industrial equipment. Google wants to quash a decision by the lower courts on several grounds, primarily that the basis of the injunction is extra-territorial in nature and that if Google were to be subject to Canadian law in this case, this could open a Pandora’s box of rulings from other jurisdictions that would require global delisting of websites thus interfering with freedom of expression online, and in effect “break the Internet”.

The question of jurisdiction with regard to cross-border conduct is clearly complicated and evolving. But, in important ways, it isn’t anything new just because the Internet is involved. As Jack Goldsmith and Tim Wu (yes, Tim Wu) wrote (way back in 2006) in Who Controls the Internet?: Illusions of a Borderless World:

A government’s responsibility for redressing local harms caused by a foreign source does not change because the harms are caused by an Internet communication. Cross-border harms that occur via the Internet are not any different than those outside the Net. Both demand a response from governmental authorities charged with protecting public values.

As I have written elsewhere, “[g]lobal businesses have always had to comply with the rules of the territories in which they do business.”

Traditionally, courts have dealt with the extraterritoriality problem by applying a rule of comity. As my colleague, Geoffrey Manne (Founder and Executive Director of ICLE), reminds me, the principle of comity largely originated in the work of the 17th Century Dutch legal scholar, Ulrich Huber. Huber wrote that comitas gentium (“courtesy of nations”) required the application of foreign law in certain cases:

[Sovereigns will] so act by way of comity that rights acquired within the limits of a government retain their force everywhere so far as they do not cause prejudice to the powers or rights of such government or of their subjects.

And, notably, Huber wrote that:

Although the laws of one nation can have no force directly with another, yet nothing could be more inconvenient to commerce and to international usage than that transactions valid by the law of one place should be rendered of no effect elsewhere on account of a difference in the law.

The basic principle has been recognized and applied in international law for centuries. Of course, the flip side of the principle is that sovereign nations also get to decide for themselves whether to enforce foreign law within their jurisdictions. To summarize Huber (as well as Lord Mansfield, who brought the concept to England, and Justice Story, who brought it to the US):

All three jurists were concerned with deeply polarizing public issues — nationalism, religious factionalism, and slavery. For each, comity empowered courts to decide whether to defer to foreign law out of respect for a foreign sovereign or whether domestic public policy should triumph over mere courtesy. For each, the court was the agent of the sovereign’s own public law.

The Canadian Supreme Court’s well-reasoned and admirably restrained approach in Equustek

Reconciling the potential conflict between the laws of Canada and those of other jurisdictions was, of course, a central subject of consideration for the Canadian Court in Equustek. The Supreme Court, as described below, weighed a variety of factors in determining the appropriateness of the remedy. In analyzing the competing equities, the Supreme Court set out the following framework:

[I]s there a serious issue to be tried; would the person applying for the injunction suffer irreparable harm if the injunction were not granted; and is the balance of convenience in favour of granting the interlocutory injunction or denying it. The fundamental question is whether the granting of an injunction is just and equitable in all of the circumstances of the case. This will necessarily be context-specific. [Here, as throughout this post, bolded text represents my own, added emphasis.]

Applying that standard, the Court held that because ordering an interlocutory injunction against Google was the only practical way to prevent Datalink from flouting the court’s several orders, and because there were no sufficient, countervailing comity or freedom of expression concerns in this case that would counsel against such an order being granted, the interlocutory injunction was appropriate.

I draw particular attention to the following from the Court’s opinion:

Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, “Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong”.

And while it is always important to pay respectful attention to freedom of expression concerns, particularly when dealing with the core values of another country, I do not see freedom of expression issues being engaged in any way that tips the balance of convenience towards Google in this case. As Groberman J.A. concluded:

In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation. The order made against Google is a very limited ancillary order designed to ensure that the plaintiffs’ core rights are respected.

In fact, as Andrew Keane Woods writes at Lawfare:

Under longstanding conflicts of laws principles, a court would need to weigh the conflicting and legitimate governments’ interests at stake. The Canadian court was eager to undertake that comity analysis, but it couldn’t do so because the necessary ingredient was missing: there was no conflict of laws.

In short, the Canadian Supreme Court, while acknowledging the importance of comity and appropriate restraint in matters with extraterritorial effect, carefully weighed the equities in this case and found that they favored the grant of extraterritorial injunctive relief. As the Court explained:

Datalink [the direct infringer] and its representatives have ignored all previous court orders made against them, have left British Columbia, and continue to operate their business from unknown locations outside Canada. Equustek has made efforts to locate Datalink with limited success. Datalink is only able to survive — at the expense of Equustek’s survival — on Google’s search engine which directs potential customers to Datalink’s websites. This makes Google the determinative player in allowing the harm to occur. On balance, since the world‑wide injunction is the only effective way to mitigate the harm to Equustek pending the trial, the only way, in fact, to preserve Equustek itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non‑existent, the interlocutory injunction should be upheld.

As I have stressed, key to the Court’s reasoning was its close consideration of possible countervailing concerns and its entirely fact-specific analysis. By the very terms of the decision, the Court made clear that its balancing would not necessarily lead to the same result where sensibilities or core values of other nations would be offended. In this particular case, they were not.

How critics of the decision (and there are many) completely miss the true import of the Court’s reasoning

In other words, the holding in this case was a function of how, given the facts of the case, the ruling would affect the particular core concerns at issue: protection and harmonization of global intellectual property rights on the one hand, and concern for the “sensibilities of other nations,” including their concern for free expression, on the other.

This should be deeply reassuring to those now criticizing the decision. And yet… it’s not.

Whether because they haven’t actually read or properly understood the decision, or because they are merely grandstanding, some commenters are proclaiming that the decision marks the End Of The Internet As We Know It — you know, it’s going to break the Internet. Or something.

Human Rights Watch, an organization I generally admire, issued a statement including the following:

The court presumed no one could object to delisting someone it considered an intellectual property violator. But other countries may soon follow this example, in ways that more obviously force Google to become the world’s censor. If every country tries to enforce its own idea of what is proper to put on the Internet globally, we will soon have a race to the bottom where human rights will be the loser.

The British Columbia Civil Liberties Association added:

Here it was technical details of a product, but you could easily imagine future cases where we might be talking about copyright infringement, or other things where people in private lawsuits are wanting things to be taken down off the internet that are more closely connected to freedom of expression.

From the other side of the traditional (if insufficiently nuanced) “political spectrum,” AEI’s Ariel Rabkin asserted that

[O]nce we concede that Canadian courts can regulate search engine results in Turkey, it is hard to explain why a Turkish court shouldn’t have the reciprocal right. And this is no hypothetical — a Turkish court has indeed ordered Twitter to remove a user (AEI scholar Michael Rubin) within the United States for his criticism of Erdogan. Once the jurisdictional question is decided, it is no use raising free speech as an issue. Other countries do not have our free speech norms, nor Canada’s. Once Canada concedes that foreign courts have the right to regulate Canadian search results, they are on the internet censorship train, and there is no egress before the end of the line.

In this instance, in particular, it is worth noting not only the complete lack of acknowledgment of the Court’s articulated constraints on taking action with extraterritorial effect, but also the fact that Turkey (among others) has hardly been waiting for approval from Canada before taking action.   

And then there’s EFF (of course). EFF, fairly predictably, suggests first — with unrestrained hyperbole — that the Supreme Court held that:

A country has the right to prevent the world’s Internet users from accessing information.

Dramatic hyperbole aside, that’s also a stilted way to characterize the content at issue in the case. But it is important to EFF’s misleading narrative to begin with the assertion that offering infringing products for sale is “information” to which access by the public is crucial. But, of course, the distribution of infringing products is hardly “expression,” as most of us would understand that term. To claim otherwise is to denigrate the truly important forms of expression that EFF claims to want to protect.

And, it must be noted, even if there were expressive elements at issue, infringing “expression” is always subject to restriction under the copyright laws of virtually every country in the world (and free speech laws, where they exist).

Nevertheless, EFF writes that the decision:

[W]ould cut off access to information for U.S. users [and] would set a dangerous precedent for online speech. In essence, it would expand the power of any court in the world to edit the entire Internet, whether or not the targeted material or site is lawful in another country. That, we warned, is likely to result in a race to the bottom, as well-resourced individuals engage in international forum-shopping to impose one country’s restrictive laws regarding free expression on the rest of the world.

Beyond the flaws of the ruling itself, the court’s decision will likely embolden other countries to try to enforce their own speech-restricting laws on the Internet, to the detriment of all users. As others have pointed out, it’s not difficult to see repressive regimes such as China or Iran use the ruling to order Google to de-index sites they object to, creating a worldwide heckler’s veto.

As always with EFF missives, caveat lector applies: None of this is fair or accurate. EFF (like the other critics quoted above) is looking only at the result — the specific contours of the global order related to the Internet — and not to the reasoning of the decision itself.

Quite tellingly, EFF urges its readers to ignore the case in front of them in favor of a theoretical one. That is unfortunate. Were EFF, et al. to pay closer attention, they would be celebrating this decision as a thoughtful, restrained, respectful, and useful standard to be employed as a foundational decision in the development of global Internet governance.

The Canadian decision is (as I have noted, but perhaps still not with enough repetition…) predicated on achieving equity upon close examination of the facts, and giving due deference to the sensibilities and core values of other nations in making decisions with extraterritorial effect.

Properly understood, the ruling is a shield against intrusions that undermine freedom of expression, and not an attack on expression.

EFF subverts the reasoning of the decision and thus camouflages its true import, all for the sake of furthering its apparently limitless crusade against all forms of intellectual property. The ruling can be read as an attack on expression only if one ascribes to the distribution of infringing products the status of protected expression — so that’s what EFF does. But distribution of infringing products is not protected expression.

Extraterritoriality on the Internet is complicated — but that undermines, rather than justifies, critics’ opposition to the Court’s analysis

There will undoubtedly be other cases that present more difficult challenges than this one in defining the jurisdictional boundaries of courts’ abilities to address Internet-based conduct with multi-territorial effects. But the guideposts employed by the Supreme Court of Canada will be useful in informing such decisions.

Of course, some states don’t (or won’t, when it suits them) adhere to principles of comity. But that was true long before the Equustek decision. And, frankly, the notion that this decision gives nations like China or Iran political cover for global censorship is ridiculous. Nations that wish to censor the Internet will do so regardless. If anything, reference to this decision (which, let me spell it out again, highlights the importance of avoiding relief that would interfere with core values or sensibilities of other nations) would undermine their efforts.

Rather, the decision will be far more helpful in combating censorship and advancing global freedom of expression. Indeed, as noted by Hugh Stephens in a recent blog post:

While the EFF, echoed by its Canadian proxy OpenMedia, went into hyperventilation mode with the headline, “Top Canadian Court permits Worldwide Internet Censorship”, respected organizations like the Canadian Civil Liberties Association (CCLA) welcomed the decision as having achieved the dual objectives of recognizing the importance of freedom of expression and limiting any order that might violate that fundamental right. As the CCLA put it,

While today’s decision upholds the worldwide order against Google, it nevertheless reflects many of the freedom of expression concerns CCLA had voiced in our interventions in this case.

As I noted in my piece in the Hill, this decision doesn’t answer all of the difficult questions related to identifying proper jurisdiction and remedies with respect to conduct that has global reach; indeed, that process will surely be perpetually unfolding. But, as reflected in the comments of the Canadian Civil Liberties Association, it is a deliberate and well-considered step toward a fair and balanced way of addressing Internet harms.

With apologies for quoting myself, I noted the following in an earlier piece:

I’m not unsympathetic to Google’s concerns. As a player with a global footprint, Google is legitimately concerned that it could be forced to comply with the sometimes-oppressive and often contradictory laws of countries around the world. But that doesn’t make it — or any other Internet company — unique. Global businesses have always had to comply with the rules of the territories in which they do business… There will be (and have been) cases in which taking action to comply with the laws of one country would place a company in violation of the laws of another. But principles of comity exist to address the problem of competing demands from sovereign governments.

And as Andrew Keane Woods noted:

Global takedown orders with no limiting principle are indeed scary. But Canada’s order has a limiting principle. As long as there is room for Google to say to Canada (or France), “Your order will put us in direct and significant violation of U.S. law,” the order is not a limitless assertion of extraterritorial jurisdiction. In the instance that a service provider identifies a conflict of laws, the state should listen.

That is precisely what the Canadian Supreme Court’s decision contemplates.

No one wants an Internet based on the lowest common denominator of acceptable speech. Yet some appear to want an Internet based on the lowest common denominator for the protection of original expression. These advocates thus endorse theories of jurisdiction that would deny societies the ability to enforce their own laws, just because sometimes those laws protect intellectual property.

And yet that reflects little more than an arbitrary prioritization of those critics’ personal preferences. In the real world (including the real online world), protection of property is an important value, deserving reciprocity and courtesy (comity) as much as does speech. Indeed, the G20 Digital Economy Ministerial Declaration adopted in April of this year recognizes the importance to the digital economy of promoting security and trust, including through the provision of adequate and effective intellectual property protection. Thus the Declaration expresses the recognition of the G20 that:

[A]pplicable frameworks for privacy and personal data protection, as well as intellectual property rights, have to be respected as they are essential to strengthening confidence and trust in the digital economy.

Moving forward in an interconnected digital universe will require societies to make a series of difficult choices balancing both competing values and competing claims from different jurisdictions. Just as it does in the offline world, navigating this path will require flexibility and skepticism (if not rejection) of absolutism — including with respect to the application of fundamental values. Even things like freedom of expression, which naturally require a balancing of competing interests, will need to be reexamined. We should endeavor to find that fine line between allowing individual countries to enforce their own national judgments and a tolerance for those countries that have made different choices. This will not be easy, as is well illustrated by something that Alice Marwick wrote earlier this year:

But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.

* * *

We need to move beyond this simplistic binary of free speech/censorship online. That is just as true for libertarian-leaning technologists as it is neo-Nazi provocateurs…. Aggressive online speech, whether practiced in the profanity and pornography-laced environment of 4Chan or the loftier venues of newspaper comments sections, positions sexism, racism, and anti-Semitism (and so forth) as issues of freedom of expression rather than structural oppression.

Perhaps we might want to look at countries like Canada and the United Kingdom, which take a different approach to free speech than does the United States. These countries recognize that unlimited free speech can lead to aggression and other tactics which end up silencing the speech of minorities — in other words, the tyranny of the majority. Creating online communities where all groups can speak may mean scaling back on some of the idealism of the early internet in favor of pragmatism. But recognizing this complexity is an absolutely necessary first step.

While I (and the Canadian Supreme Court, for that matter) share EFF’s unease over the scope of extraterritorial judgments, I fundamentally disagree with EFF that the Equustek decision “largely sidesteps the question of whether such a global order would violate foreign law or intrude on Internet users’ free speech rights.”

In fact, it is EFF’s position that comes much closer to indifference to the laws and values of other countries; EFF’s position would essentially always prioritize the particular speech values adopted in the US, regardless of whether the countries affected in a dispute had adopted them. It is therefore inconsistent with the true nature of comity.

Absolutism and exceptionalism will not be a sound foundation for achieving global consensus and the effective operation of law. As stated by the Canadian Supreme Court in Equustek, courts should enforce the law — whatever the law is — to the extent that such enforcement does not substantially undermine the core sensitivities or values of nations where the order will have effect.

EFF ignores the process in which the Court engaged precisely because EFF — not another country, but EFF — doesn’t find the enforcement of intellectual property rights to be compelling. But that unprincipled approach would naturally lead in a different direction where the court sought to protect a value that EFF does care about. Such a position arbitrarily elevates EFF’s idiosyncratic preferences. That is simply not a viable basis for constructing good global Internet governance.

If the Internet is both everywhere and nowhere, our responses must reflect that reality, and be based on the technology-neutral application of laws, not the abdication of responsibility premised upon an outdated theory of tech exceptionalism under which cyberspace is free from the application of the laws of sovereign nations. That is not the path to either freedom or prosperity.

To realize the economic and social potential of the Internet, we must be guided by both a determination to meaningfully address harms, and a sober reservation about interfering in the affairs of other states. The Supreme Court of Canada’s decision in Google v. Equustek has planted a flag in this space. It serves no one to pretend that the Court decided that a country has the unfettered right to censor the Internet. That’s not what it held — and we should be grateful for that. To suggest otherwise may indeed be self-fulfilling.

“Houston, we have a problem.” It’s the most famous line from Apollo 13 and perhaps how most Republicans are feeling about their plans to repeal and replace Obamacare.

As repeal and replace has given way to tinker and punt, Congress should take a lesson from one of my favorite scenes from Apollo 13.

“We gotta find a way to make this, fit into the hole for this, using nothing but that.”

Let’s look at a way Congress can get rid of the individual mandate, lower prices, cover pre-existing conditions, and provide universal coverage, using the box of tools that we already have on the table.

Some ground rules

First ground rule: (Near) universal access to health insurance. It’s pretty clear that many, if not most, Americans believe that everyone should have health insurance. Some go so far as to call it a “basic human right.” This may be one of the biggest shifts in U.S. public opinion over time.

Second ground rule: Everything has a price; there’s no free lunch. If you want to add another essential benefit, premiums will go up. If you want community rating, young healthy people are going to subsidize older, sicker people. If you want a lower deductible, you’ll pay a higher premium, as shown in the figure below, which plots all the plans available on Oregon’s ACA exchange in 2017. It shows that a $1,000 decrease in deductible is associated with almost $500 a year in additional premium payments. There’s no free lunch.

[Figure: annual premiums plotted against deductibles for all plans on Oregon’s ACA exchange, 2017]
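The tradeoff can be sketched numerically. The roughly $500-per-$1,000 slope comes from the Oregon figure above; the specific deductible changes below are hypothetical examples, not actual exchange data:

```python
# Illustrative sketch of the premium/deductible tradeoff described above.
# The slope (~$500 more premium per $1,000 less deductible) is from the
# Oregon figure; the example plans are hypothetical.
SLOPE = -500 / 1000  # dollars of annual premium per dollar of deductible

def premium_change(deductible_change):
    """Estimated change in annual premium for a given change in deductible."""
    return SLOPE * deductible_change

# Cutting a deductible from $5,000 to $2,000 (a $3,000 decrease)
# costs roughly $1,500 more per year in premiums.
print(premium_change(-3000))  # 1500.0
```

The point of the sketch is simply that the "savings" from a lower deductible are paid for, dollar for (roughly half a) dollar, in higher premiums.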

Third ground rule: No new programs, no radical departures. Maybe Singapore has a better health insurance system. Maybe Canada’s is better. Switching to either system would be a radical departure from the tools we have to work with. This is America. This is Apollo 13. We gotta find a way to make this, fit into the hole for this, using nothing but that.

Private insurance

Employer and individual mandates: Gone. This would be a substantial change from the ACA, but it is written into the Senate health insurance bill. The individual mandate is perhaps the most hated part of the ACA, but it was also the most important part of Obamacare. Without the coverage mandate, much of the ACA falls apart, as we are seeing now.

Community rating, mandated benefits (aka “minimum essential benefits”), and pre-existing conditions. Sen. Ted Cruz has a brilliantly simple idea: As long as an insurer offers at least one ACA-compliant plan in a state, it would also be allowed to offer non-Obamacare-compliant plans in that state. In other words, every state would have at least one plan that checks all the Obamacare boxes of community rating, minimum essential benefits, and pre-existing conditions. If you like Obamacare, you can keep Obamacare. In addition, there could be hundreds of other plans from which consumers can pick to fit each person’s unique situation of age, health status, and ability/willingness to pay. A single healthy 27-year-old would likely choose a plan that’s very different from the plan chosen by a family of four with 40-something parents and school-aged children.

Allow—but don’t require—insurance to be bought and sold across state lines. I don’t know if this is a big deal or not. Some folks on the right think this could be a panacea. Some folks on the left think this is terrible and would never work. Let’s find out. Some say insurance companies don’t want to sell policies across state lines. Some will, some won’t. Let’s find out, but it shouldn’t be illegal. No one is worse off by loosening a constraint.

Tax deduction for insurance premiums. Keep insurance premiums as a deductible expense for businesses: No change from current law. In addition, make insurance premiums deductible on individual taxes. This is a not-so-radical change from current law, which already allows deductions for medical expenses. If someone has employer-provided insurance, the business would be able to deduct the share the company pays, and the worker would be able to deduct the employee share of the premium from his or her personal taxes. Sure, the deduction will reduce tax revenues, but the increase in private insurance coverage would reduce the costs of Medicaid and charity care.

These straightforward changes would preserve one or more ACA-compliant plans for those who want to pay Obamacare’s “silver prices,” allow for consumer choice across other plans, and result in premiums that are more closely aligned with the benefits chosen by consumers. Allowing individuals to deduct health insurance premiums is also a crucial step in fostering insurance portability.

Medicaid

Even with the changes in the private market, some consumers will find that they can’t afford, or don’t want to pay, the market price for private insurance. These people would automatically be moved into Medicaid. Those in poverty (or below some X% of the poverty line) would pay nothing, and everyone else would be charged a “premium” based on ability to pay. A single mother in poverty would pay nothing for Medicaid coverage, but Elon Musk (if he chose this option) would pay the full price. A middle-class family would pay something in between free and full price. Yes, this is a pretty wide divergence from the original intent of Medicaid, but it’s a relatively modest change from the ACA’s expansion.
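One way to picture the sliding scale is a simple phase-in function. Every number below (the poverty line, the full price of coverage, the phase-in rate) is a hypothetical placeholder for whatever Congress would actually set:

```python
# Hypothetical sliding-scale Medicaid premium: free at or below the
# poverty line, phasing in with income, capped at the full cost of coverage.
POVERTY_LINE = 12_000   # illustrative annual figure, not the official FPL
FULL_PRICE = 6_000      # illustrative full annual cost of coverage
PHASE_IN_RATE = 0.10    # premium rises 10 cents per dollar above the line

def medicaid_premium(income):
    excess = max(0.0, income - POVERTY_LINE)
    return min(FULL_PRICE, PHASE_IN_RATE * excess)

print(medicaid_premium(10_000))   # 0.0 (in poverty: free)
print(medicaid_premium(40_000))   # 2800.0 (middle income: partial premium)
print(medicaid_premium(500_000))  # 6000 (high income: full price)
```

The single mother, the middle-class family, and Elon Musk all face the same schedule; only their incomes differ.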

While the individual mandate goes away, anyone who does not buy insurance in the private market and is not covered by Medicare would be “mandated” into Medicaid coverage. At the same time, the approach preserves consumer choice. That is, consumers have a choice of buying an ACA-compliant plan, buying one of the hundreds of other private plans offered throughout the states, or enrolling in Medicaid.

Would the Medicaid rolls explode? Who knows?

The Census Bureau reports that 15 percent of adults and 40 percent of children currently are enrolled in Medicaid. Research published in the New England Journal of Medicine finds that 44 percent of people who enrolled in Medicaid under Obamacare qualified for Medicaid before the ACA.

With low cost private insurance alternatives to Medicaid, some consumers would likely choose the private plans over Medicaid coverage. Also, if Medicaid premiums increased with incomes, able-bodied and working adults would likely shift out of Medicaid to private coverage as the government plan loses its cost-competitiveness.

The income-based premiums mean that Medicaid would become partially self-supporting.

Opponents of Medicaid expansion claim that the program provides inferior service: fewer providers, lower quality, worse outcomes. If that’s true, then that’s a feature, not a bug. If consumers have to pay for their government insurance and that coverage is inferior, then consumers have an incentive to exit the Medicaid market and enter the private market. Medicaid becomes the insurer of last resort that it was intended to be.

A win-win

The coverage problem is solved. Every American would have health insurance.

Consumer choice is expanded. By allowing non-ACA-compliant plans, consumers can choose the insurance that fits their unique situation.

The individual mandate penalty is gone. Those who choose not to buy insurance would get placed into Medicaid. Higher-income individuals would pay a portion of the Medicaid costs, but this isn’t a penalty for having no insurance; it’s the price of having insurance.

The pre-existing conditions problem is solved. Americans with pre-existing conditions would have a choice of at least two insurance options: At least one ACA-compliant plan in the private market and Medicaid.

This isn’t a perfect solution, it may not even be a good solution, but it’s a solution that’s better than what we’ve got and better than what Congress has come up with so far. And, it works with the box of tools that’s already been dumped on the table.

On July 1, the minimum wage will spike in several cities and states across the country. Portland, Oregon’s minimum wage will rise by $1.50 to $11.25 an hour. Los Angeles will also hike its minimum wage by $1.50 to $12 an hour. Recent research shows that these hikes will make low wage workers poorer.

A study supported and funded in part by the Seattle city government was released this week, along with an NBER paper, evaluating Seattle’s minimum wage increase to $13 an hour. The papers find that the increase to $13 an hour had significant negative impacts on employment and led to lower incomes for minimum wage workers.

The study is the first to examine a very high minimum wage in a city. During the study period, Seattle’s minimum wage rose from what had been the nation’s highest state minimum wage to an even higher level. The study is also unique in its use of administrative data with far more detail than is usually available to economics researchers.

Conclusions from the research focusing on Seattle’s increase to $13 an hour are clear: The policy harms those it was designed to help.

  • The increase led to a loss of more than 5,000 jobs and a 9 percent reduction in hours worked by those who retained their jobs.
  • Low-wage workers lost an average of $125 per month. The minimum wage has always been a terrible way to reduce poverty. In 2015 and 2016, I presented analysis to the Oregon Legislature indicating that incomes would decline with a steep increase in the minimum wage. The Seattle study provides evidence backing up that forecast.
  • Minimum wage supporters point to research from the 1990s that made headlines with claims that minimum wage increases had no impact on restaurant employment. The authors of the Seattle study were able to replicate the results of those papers by using their own data and imposing the same limitations the earlier researchers had faced. The Seattle study shows that those earlier findings were likely driven by the researchers’ approach and data limitations. This is a big deal: it is a novel research approach that strengthens the Seattle study’s results.

Some inside baseball.

The Seattle Minimum Wage Study was supported and funded in part by the Seattle city government. It’s rare that policymakers make any effort to measure the effectiveness of their policies, so Seattle should get some points for transparency.

Or not so transparent: The mayor of Seattle commissioned another study, by an advocacy group at Berkeley whose previous work is uniformly in favor of hiking the minimum wage (the group testified before the Oregon Legislature to cheerlead the state’s increase). It should come as no surprise that the Berkeley group released its report several days before the city’s “official” study came out.

You might think to yourself, “OK, that’s Seattle. Seattle is different.”

But, maybe Seattle is not that different. In fact, maybe the negative impacts of high minimum wages are universal, as seen in another study that came out this week, this time from Denmark.

In Denmark, the minimum wage jumps by 40 percent when a worker turns 18. The Danish researchers found that this steep increase was associated with employment dropping by one-third, as seen in the chart below from the paper.

[Figure 1 from the Danish study (Kreiner et al.): employment around the age-18 minimum wage jump]
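As a rough back-of-envelope sketch, the two Danish figures quoted above imply an employment elasticity with respect to the minimum wage. This assumes, purely for illustration, that the full one-third employment drop is attributable to the 40 percent wage jump:

```python
# Back-of-envelope implied elasticity from the Danish age-18 discontinuity.
# Assumption (for illustration only): the entire employment drop is caused
# by the minimum wage jump, with no other age-18 effects.
wage_jump = 0.40         # minimum wage rises 40 percent at age 18
employment_drop = 1 / 3  # employment falls by about one-third

# Elasticity = percent change in employment / percent change in wage
elasticity = -employment_drop / wage_jump
print(f"implied elasticity: {elasticity:.2f}")  # prints -0.83
```

An elasticity near -0.8 is far larger in magnitude than most estimates from small incremental minimum wage increases, which is consistent with the post’s point that very steep hikes behave differently.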

Let’s look at what’s going to happen in Oregon. The state’s employment department estimates that about 301,000 jobs will be affected by the rate increase. With employment of almost 1.8 million, that means one in six workers will be affected by the steep hikes going into effect on July 1. That’s a big piece of the work force. By way of comparison, in the past when the minimum wage would increase by five or ten cents a year, only about six percent of the workforce was affected.
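The arithmetic in the paragraph above can be checked quickly. The 301,000 and 1.8 million figures are the ones quoted from the state employment department:

```python
# Share of Oregon's workforce affected by the July 1 minimum wage hike,
# using the figures quoted in the text.
affected_jobs = 301_000        # state employment department estimate
total_employment = 1_800_000   # approximate Oregon employment

share = affected_jobs / total_employment
print(f"{share:.1%} of workers affected, roughly 1 in {round(1 / share)}")
# prints: 16.7% of workers affected, roughly 1 in 6
```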

This is going to disproportionately affect youth employment. As noted in my testimony to the legislature, unemployment for Oregonians age 16 to 19 is 8.5 percentage points higher than the national average. This was not always the case. In the early 1990s, Oregon’s youth had roughly the same rate of unemployment as the U.S. as a whole. Then, as Oregon’s minimum wage rose relative to the federal minimum wage, Oregon’s youth unemployment worsened. Just this week, Multnomah County made a desperate plea for businesses to hire more youth as summer interns.

It has been suggested that Oregon youth have traded education for work experience—in essence, that they have opted to stay in high school or enroll in higher education instead of entering the workforce. The figure below shows, however, that youth unemployment has increased both for those enrolled in school and for those who are not. The figure debunks the notion that education and employment are substitutes. In fact, the large number of students seeking work demonstrates that many youth want employment while they further their education.

[Figure: Oregon youth unemployment, for those enrolled and not enrolled in school]

None of these results should be surprising. Minimum wage research is more than a hundred years old. Aside from the “man bites dog” research from the 1990s, economists have broadly agreed that higher minimum wages are associated with reduced employment, especially among youth. The research published this week is groundbreaking in its data and methodology. At the same time, the results are unsurprising to anyone with any understanding of economics or experience running a business.