The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.


In my fifteen years as a law professor, I’ve become convinced that there’s a hole in the law school curriculum.  When it comes to regulation, we focus intently on the process of regulating and the interpretation of rules (see, e.g., typical administrative law and “leg/reg” courses), but we rarely teach students what, as a matter of substance, distinguishes a good regulation from a bad one.  That’s unfortunate, because lawyers often take the lead in crafting regulatory approaches. 

In the fall of 2017, I published a book seeking to fill this hole.  That book, How to Regulate: A Guide for Policymakers, is the inspiration for a symposium that will occur this Friday (Feb. 8) at the University of Missouri Law School.

The symposium, entitled Protecting the Public While Fostering Innovation and Entrepreneurship: First Principles for Optimal Regulation, will bring together policymakers and regulatory scholars who will go back to basics. Participants will consider two primary questions:

(1) How, as a substantive matter, should regulation be structured in particular areas? (Specifically, what regulatory approaches would be most likely to forbid the bad while chilling as little of the good as possible and while keeping administrative costs in check? In other words, what rules would minimize the sum of error and decision costs?), and

(2) What procedures would be most likely to generate such optimal rules?


The symposium webpage includes the schedule for the day (along with a button to Livestream the event), but here’s a quick overview.

I’ll set the stage by discussing the challenge policymakers face in trying to accomplish three goals simultaneously: ban bad instances of behavior, refrain from chilling good ones, and keep rules simple enough to be administrable.

We’ll then hear from a panel of experts about the principles that would best balance those competing concerns in their areas of expertise. Specifically:

  • Jerry Ellig (George Washington University; former chief economist of the FCC) will discuss telecommunications policy;
  • TOTM’s own Gus Hurwitz (Nebraska Law) will consider regulation of Internet platforms; and
  • Erika Lietzan (Mizzou Law) will examine the regulation of therapeutic drugs and medical devices.

Hopefully, we can identify some common threads among the substantive principles that should guide effective regulation in these disparate areas.

Before we turn to consider regulatory procedures, we will hear from our keynote speaker, Commissioner Hester Peirce of the SEC. As The Economist recently reported, Commissioner Peirce has been making waves with her speeches, many of which have gone back to basics and asked why the government is intervening and whether it’s doing so in an optimal fashion.

Following Commissioner Peirce’s address, we will hear from the following panelists about how regulatory procedures should be structured in order to generate substantively optimal rules:

  • Bridget Dooling (George Washington University; former official in the White House Office of Information and Regulatory Affairs);
  • Ken Davis (former Deputy Attorney General of Virginia and member of the Federalist Society’s Regulatory Transparency Project);
  • James Broughel (Senior Fellow at the Mercatus Center; expert on state-level regulatory review procedures); and
  • Justin Smith (former counsel to Missouri governor; led the effort to streamline the Missouri regulatory code).

As you can see, this Friday is going to be a great day at Mizzou Law. If you’re close enough to join us in person, please come. Otherwise, please join us via Livestream.

In the opening seconds of what was surely one of the worst oral arguments in a high-profile case that I have ever heard, Pantelis Michalopoulos, arguing for petitioners against the FCC’s 2018 Restoring Internet Freedom Order (RIFO), expertly captured both why the side he was representing should lose and the overall absurdity of the entire net neutrality debate: “This order is a stab in the heart of the Communications Act. It would literally write ‘telecommunications’ out of the law. It would end the communications agency’s oversight over the main communications service of our time.”

The main communications service of our time is the Internet. The Communications and Telecommunications Acts were written before the advent of the modern Internet, for an era when the telephone was the main communications service of our time. The reality is that technological evolution has written “telecommunications” out of these Acts – the “telecommunications services” they were written to regulate are no longer the important communications services of the day.

The basic question of the net neutrality debate is whether we expect Congress to weigh in on how regulators should respond when an industry undergoes fundamental change, or whether we should instead allow those regulators to redefine the scope of their own authority. In the RIFO case, petitioners (and, more generally, net neutrality proponents) argue that agencies should get to define their own authority. Those on the other side of the issue (including me) argue that it is up to Congress to provide agencies with guidance in response to changing circumstances – and worry that allowing independent and executive branch agencies broad authority to act without Congressional direction is a recipe for unfettered, unchecked, and fundamentally abusive concentrations of power in the hands of the executive branch.

These arguments were central to the DC Circuit’s evaluation of the prior FCC net neutrality order – the Open Internet Order. But rather than consider the core issue of the case, the four hours of oral arguments this past Friday were instead a relitigation of ephemeral distinctions addressed long ago, padded out with irrelevance and esoterica, and argued with a passion available only to those who believe in faerie tales and monsters under their bed. Perhaps some reveled in hearing counsel for both sides clumsily fumble through strained explanations of the difference between standalone telecommunications services and information services that are by definition integrated with them, or awkward discussions about how ISPs may implement hypothetical prioritization technologies that have not even been developed. These well-worn arguments successfully demonstrated, once again, how many angels can dance upon the head of a single pin – only never before have so many angels been so irrelevant.

This time around, petitioners challenging the order were able to scare up some intervenors to make novel arguments on their behalf. Most notably, they were able to scare up a group of public safety officials to argue that the FCC had failed to consider arguments that the RIFO would jeopardize public safety services that rely on communications networks. I keep using the word “scare” because these arguments are based upon incoherent fears peddled by net neutrality advocates in order to find unsophisticated parties to sign on to their policy adventures. The public safety fears are about as legitimate as concerns that the Easter Bunny might one day win the Preakness – and merited as much response from the FCC as a petition from the Racehorse Association of America demanding the FCC regulate rabbits.

In the end, I have no idea how the DC Circuit is going to come down in this case. Public safety concerns – like declarations of national emergencies – are often given undue and unwise weight. And there is a legitimately puzzling, if fundamentally academic, argument that could lead the court to remand the Order back to the Commission: it concerns a provision of the Communications Act (47 USC 257(c)) that Congress repealed after the Order was adopted but that was a noteworthy part of the notice the FCC gave when the Order was proposed.

In the end, however, this case is unlikely to address the fundamental question of whether the FCC has any business regulating Internet access services. If the FCC loses, we’ll be back here in another year or two; if the FCC wins, we’ll be back here the next time a Democrat is in the White House. And the real tragedy is that every minute the FCC spends on the interminable net neutrality non-debate is a minute not spent on issues like closing the rural digital divide or promoting competitive entry into markets by next generation services.

So much wasted time. So many billable hours. So many angels dancing on the head of a pin. If only they were the better angels of our nature.


Postscript: If I sound angry about the endless fights over net neutrality, it’s because I am. I live in one of the highest-cost, lowest-connectivity states in the country. A state where much of the territory is covered by small rural carriers for whom the cost of just following these debates can mean delaying the replacement of an old switch, upgrading a circuit to fiber, or wiring a street. A state in which if prioritization were to be deployed it would be so that emergency services would be able to work over older infrastructure or so that someone in a rural community could remotely attend classes at the University or consult with a primary care physician (because forget high speed Internet – we have counties without doctors in them). A state in which if paid prioritization were to be developed it would be to help raise capital to build out service to communities that have never had high-speed Internet access.

So yes: the fact that we might be in for another year of rule making followed by more litigation because some firefighters signed up for the wrong wireless service plan and then were duped into believing a technological, economic, and political absurdity about net neutrality ensuring they get free Internet access does make me angry. Worse, unlike the hypothetical harms net neutrality advocates are worried about, the endless discussion of net neutrality causes real, actual, concrete harm to the people net neutrality advocates like to pat themselves on the back as advocating for. We should all be angry about this, and demanding that Congress put this debate out of our misery.

The US Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights recently held hearings to see what, if anything, the U.S. might learn from the approaches of other countries regarding antitrust and consumer protection. US lawmakers would do well, however, to be wary of examples from jurisdictions rooted in different legal and cultural traditions. Shortly before the hearing, for example, the Australian Competition and Consumer Commission (ACCC) announced that it was exploring broad new regulations, predicated on theoretical harms, that would threaten both consumer welfare and individuals’ rights to free expression in ways completely at odds with American norms.

The ACCC seeks vast discretion to shape the way that online platforms operate — a regulatory venture that threatens to undermine the value which companies provide to consumers. Even more troubling are its plans to regulate free expression on the Internet, which if implemented in the US, would contravene Americans’ First Amendment guarantees to free speech.

The ACCC’s errors are fundamental, starting with the contradictory assertion that:

Australian law does not prohibit a business from possessing significant market power or using its efficiencies or skills to “out compete” its rivals. But when their dominant position is at risk of creating competitive or consumer harm, governments should stay ahead of the game and act to protect consumers and businesses through regulation.

The ACCC thus recognizes that businesses may work to out-compete their rivals and thereby gain market share. But this is immediately followed by the caveat that the state may prevent such activity whenever those market gains are merely “at risk” of coming at the expense of consumers or business rivals. In other words, the ACCC does not need to show that harm has been done, merely that it might take place, even if the products and services being provided otherwise benefit the public.

The ACCC report then uses this fundamental error as the basis for recommending content regulation of digital platforms like Facebook and Google (who have apparently been identified by Australia’s clairvoyant PreCrime Antitrust unit as being guilty of future violations). It argues that the lack of transparency and oversight in the algorithms these companies employ could result in a range of possible social and economic damages, despite the fact that consumers continue to rely on these products. These potential issues include prioritization of the content and products of the host company, under-serving of ads within their products, and creation of “filter bubbles” that conceal content from particular users thereby limiting their full range of choice.

The focus of these concerns is the kind and quality of information that users are receiving as a result of the “media market” that results from the “ranking and display of news and journalistic content.” As a remedy for its hypothesized concerns, the ACCC has proposed a new regulatory authority tasked with overseeing the operation of the platforms’ algorithms. The ACCC claims this would ensure that search and newsfeed results are balanced and of high quality. This policy would undermine consumer welfare in pursuit of remedying speculative harms.

Rather than the search results or news feeds being determined by the interaction between the algorithm and the user, the results would instead be altered to comply with criteria established by the ACCC. This would substantially undermine the value of these services. The competitive differentiation between, say, Google and Bing lies in their unique, proprietary search algorithms. The ACCC’s intervention would necessarily remove some of this differentiation between online providers, notionally to improve the “quality” of results. But such second-guessing by regulators would quickly undermine the actual quality and utility of these services to users.

A second, but more troubling prospect is the threat of censorship that emerges from this kind of regime. Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access. Such regulatory power thus affects not only what users can read, but what media outlets might be able to say in order to successfully offer curated content. This sort of control is deeply problematic since users are no longer merely faced with a potential “filter bubble” based on their own preferences interacting with a single provider, but with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Undoubtedly antitrust and consumer protection laws should be continually reviewed and revised. However, if we wish to uphold the principles upon which the US was founded and continue to protect consumer welfare, the US should avoid following the path Australia proposes to take.

A recent working paper by Hashmat Khan and Matthew Strathearn attempts to empirically link anticompetitive collusion to the boom and bust cycles of the economy.

The level of collusion is higher during a boom relative to a recession as collusion occurs more frequently when demand is increasing (entering into a collusive arrangement is more profitable and deviating from an existing cartel is less profitable). The model predicts that the number of discovered cartels and hence antitrust filings should be procyclical because the level of collusion is procyclical.

The first sentence—a hypothesis that collusion is more likely during a “boom” than in recession—seems reasonable. At the same time, a case can be made that collusion would be more likely during recession. For example, a reduced risk of entry from competitors would reduce the cost of collusion.

The second sentence, however, seems a stretch, mainly because it doesn’t recognize the time lags between the collusive activity, the date the collusion is discovered by authorities, and the date the case is filed.

Perhaps more importantly, it doesn’t acknowledge that many collusive arrangements span months, if not years. That span of time could include times of “boom” and times of recession. Thus, it can be argued that the date of the filing has little (or nothing) to do with the span over which the collusive activity occurred.

I did a very lazy man’s test of my criticisms. I looked at six of the filings cited by Khan and Strathearn for the year 2011, a “boom” year with a high number of horizontal price fixing cases filed.

[Figure: data on the six 2011 filings examined]

My first suspicion was correct. In these six cases, an average of more than three years passed between the date of the last collusive activity and the date the case was filed. Thus, whether the economy is in a boom or a bust when the case is filed provides no useful information regarding the state of the economy when the collusion occurred.

Nevertheless, my lazy man’s small sample test provides some interesting—and I hope useful—information regarding Khan and Strathearn’s conclusions.

  1. From July 2001 through September 2009, 24 of the 99 months were in recession. In other words, during this period, there was a 24 percent chance the economy was in recession in any given month.
  2. Five of the six collusive arrangements began when the economy was in recovery. Only one began during a recession. This may seem to support their conclusion that collusive activity is more likely during a recovery. However, even if the arrangements began randomly, there would be a 55 percent chance that five or more began during a recovery (see the quick check sketched after this list). So, you can’t read too much into the observation that most of the collusive agreements began during a “boom.”
  3. In two of the cases, the collusive activity occurred during a span of time that had no recession. The chance of this happening randomly is less than 1 in 20,000, supporting their conclusion regarding collusive activity and the business cycle.
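
For point 2, the 55 percent figure checks out under a simple binomial calculation, assuming (my assumption, not a method stated in the post) that each of the six arrangements independently faced the 24 percent recession probability noted in point 1. A minimal Python sketch:

    from math import comb

    p_recovery = 1 - 24 / 99   # chance a randomly chosen month in the period is a recovery month
    n = 6                      # six collusive arrangements examined

    # Probability that five or more of the six began during a recovery
    p_five_or_more = sum(
        comb(n, k) * p_recovery**k * (1 - p_recovery)**(n - k) for k in (5, 6)
    )
    print(round(p_five_or_more, 2))  # prints 0.55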

Khan and Strathearn fall short in linking collusive activity to the business cycle but do a good job of linking antitrust enforcement activities to the business cycle. The information they use from the DOJ website is sufficient to determine when the collusive activity occurred—but it’ll take more vigorous “scrubbing” (their word) of the site to get the relevant data.

The bigger question, however, is the relevance of this research. Naturally, one could argue this line of research indicates that competition authorities should be extra vigilant during a booming economy. Yet, Adam Smith famously noted, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” This suggests that collusive activity—or the temptation to engage in such activity—is always and everywhere present, regardless of the business cycle.

 

Writing in the New York Times, journalist E. Tammy Kim recently called for Seattle and other pricey, high-tech hubs to impose a special tax on Microsoft and other large employers of high-paid workers. Efficiency demands such a tax, she says, because those companies are imposing a negative externality: By driving up demand for housing, they are causing rents and home prices to rise, which adversely affects city residents.

Arguing that her proposal is “akin to a pollution tax,” Ms. Kim writes:

A half-century ago, it seemed inconceivable that factories, smelters or power plants should have to account for the toxins they released into the air.  But we have since accepted the idea that businesses should have to pay the public for the negative externalities they cause.

It is true that negative externalities—costs imposed on people who are “external” to the process creating those costs (as when a factory belches rancid smoke on its neighbors)—are often taxed. One justification for such a tax is fairness: It seems inequitable that one party would impose costs on another; justice may demand that the victimizer pay. The justification cited by the economist who first proposed such taxes, though, was something different. In his 1920 opus, The Economics of Welfare, British economist A.C. Pigou proposed taxing behavior involving negative externalities in order to achieve efficiency—an increase in overall social welfare.   

With respect to the proposed tax on Microsoft and other high-tech employers, the fairness argument seems a stretch, and the efficiency argument outright fails. Let’s consider each.

To achieve fairness by forcing a victimizer to pay for imposing costs on a victim, one must determine who is the victimizer. Ms. Kim’s view is that Microsoft and its high-paid employees are victimizing (imposing costs on) incumbent renters and lower-paid homebuyers. But is that so clear?

Microsoft’s desire to employ high-skilled workers, and those employees’ desire to live near their work, conflicts with incumbent renters’ desire for low rent and lower paid homebuyers’ desire for cheaper home prices. If Microsoft got its way, incumbent renters and lower paid homebuyers would be worse off.

But incumbent renters’ and lower-paid homebuyers’ insistence on low rents and home prices conflicts with the desires of Microsoft, the high-skilled workers it would like to hire, and local homeowners. If incumbent renters and lower paid homebuyers got their way and prevented Microsoft from employing high-wage workers, Microsoft, its potential employees, and local homeowners would be worse off. Who is the victim here?

As Nobel laureate Ronald Coase famously observed, in most cases involving negative externalities, there is a reciprocal harm: Each party is a victim of the other party’s demands and a victimizer with respect to its own. When both parties are victimizing each other, it’s hard to “do justice” by taxing “the” victimizer.

A desire to achieve efficiency provides a sounder basis for many so-called Pigouvian taxes. With respect to Ms. Kim’s proposed tax, however, the efficiency justification fails. To see why that is so, first consider how it is that Pigouvian taxes may enhance social welfare.

When a business engages in some productive activity, it uses resources (labor, materials, etc.) to produce some sort of valuable output (e.g., a good or service). In determining what level of productive activity to engage in (e.g., how many hours to run the factory, etc.), it compares its cost of engaging in one more unit of activity to the added benefit (revenue) it will receive from doing so. If its so-called “marginal cost” from the additional activity is less than or equal to the “marginal benefit” it will receive, it will engage in the activity; otherwise, it won’t.  

When the business is bearing all the costs and benefits of its actions, this outcome is efficient. The cost of the inputs used in production is determined by the value they could generate in alternative uses. (For example, if a flidget producer could create $4 of value from an ounce of tin, a widget-maker would have to bid at least $4 to win that tin from the flidget-maker.) If a business finds that continued production generates additional revenue (reflective of consumers’ subjective valuation of the business’s additional product) in excess of its added cost (reflective of the value its inputs could create if deployed toward their next-best use), then producing more moves productive resources to their highest and best uses, enhancing social welfare. This outcome is “allocatively efficient,” meaning that productive resources have been allocated in a manner that wrings the greatest possible value from them.

Allocative efficiency may not result, though, if the producer is able to foist some of its costs onto others.  Suppose that it costs a producer $4.50 to make an additional widget that he could sell for $5.00. He’d make the widget. But what if producing the widget created pollution that imposed $1 of cost on the producer’s neighbors? In that case, it could be inefficient to produce the widget; the total marginal cost of doing so, $5.50, might well exceed the marginal benefit produced, which could be as low as $5.00. Negative externalities, then, may result in an allocative inefficiency—i.e., a use of resources that produces less total value than some alternative use.

Pigou’s idea was to use taxes to prevent such inefficiencies. If the government were to charge the producer a tax equal to the cost his activity imposed on others ($1 in the above example), then he would capture all the marginal benefit and bear all the marginal cost of his activity. He would thus be motivated to continue his activity only to the point at which its total marginal benefit equaled its total marginal cost. The point of a Pigouvian tax, then, is to achieve allocative efficiency—i.e., to channel productive resources toward their highest and best ends.
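
Here is a minimal numerical sketch of the widget example, using the figures above ($5.00 in revenue, $4.50 in private cost, $1.00 in external cost) to show how a tax equal to the external cost changes the producer’s decision. The simple decision rule is my own illustration of the logic, not a formal model from the text:

    price = 5.00          # marginal benefit: revenue from one more widget
    private_cost = 4.50   # producer's own marginal cost
    external_cost = 1.00  # pollution cost imposed on neighbors

    def produces(marginal_cost, marginal_benefit):
        # The producer makes the widget whenever its own marginal cost is covered.
        return marginal_cost <= marginal_benefit

    # Without a tax: the widget gets made even though its total cost ($5.50)
    # exceeds the $5.00 benefit.
    print(produces(private_cost, price))           # True
    print(price - (private_cost + external_cost))  # -0.5 (net social loss)

    # With a Pigouvian tax equal to the external cost, the producer internalizes
    # the harm and declines to make the socially wasteful widget.
    pigouvian_tax = external_cost
    print(produces(private_cost + pigouvian_tax, price))  # False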

When it comes to the negative externality Ms. Kim has identified—an increase in housing prices occasioned by high-tech companies’ hiring of skilled workers—the efficiency case for a Pigouvian tax crumbles. That is because the external cost at issue here is a “pecuniary” externality, a special sort of externality that does not generate inefficiency.

A pecuniary externality is one where the adverse third-party effect consists of an increase in market prices. If that’s the case, the allocative inefficiency that may justify Pigouvian taxes does not exist. There’s no inefficiency from the mere fact that buyers pay more.  Their loss is perfectly offset by a gain to sellers, and—here’s the crucial part—the higher prices channel productive resources toward, not away from, their highest and best ends. High rent levels, for example, signal to real estate developers that more resources should be devoted to creating living spaces within the city. That’s allocatively efficient.

Now, it may well be the case that government policies thwart developers from responding to those salutary price signals. The cities that Ms. Kim says should impose a tax on high-tech employers—Seattle, San Francisco, Austin, New York, and Boulder—have some of the nation’s most restrictive real estate development rules. But that’s a government failure, not a market failure.

In the end, Ms. Kim’s pollution tax analogy fails. The efficiency case for a Pigouvian tax to remedy negative externalities does not apply when, as here, the externality at issue is pecuniary.

For more on pecuniary versus “technological” (non-pecuniary) externalities and appropriate responses thereto, check out Chapter 4 of my recent book, How to Regulate: A Guide for Policymakers.

Drug makers recently announced their 2019 price increases on over 250 prescription drugs. As examples, AbbVie Inc. increased the price of the world’s top-selling drug Humira by 6.2 percent, and Hikma Pharmaceuticals increased the price of blood-pressure medication Enalaprilat by more than 30 percent. Allergan reported an average increase across its portfolio of drugs of 3.5 percent; although the drug maker is keeping most of its prices the same, it raised the prices on 27 drugs by 9.5 percent and on another 24 drugs by 4.9 percent. Other large drug makers, such as Novartis and Pfizer, will announce increases later this month.

So far, the number of price increases is significantly lower than last year when drug makers increased prices on more than 400 drugs.  Moreover, on the drugs for which prices did increase, the average price increase of 6.3 percent is only about half of the average increase for drugs in 2018. Nevertheless, some commentators have expressed indignation and President Trump this week summoned advisors to the White House to discuss the increases.  However, commentators and the administration should keep in mind what the price increases actually mean and the numerous players that are responsible for increasing drug prices. 

First, it is critical to emphasize the difference between drug list prices and net prices.  The drug makers recently announced increases in the list, or “sticker” prices, for many drugs.  However, the list price is usually very different from the net price that most consumers and/or their health plans actually pay, which depends on negotiated discounts and rebates.  For example, whereas drug list prices increased by an average of 6.9 percent in 2017, net drug prices after discounts and rebates increased by only 1.9 percent. The differential between the growth in list prices and net prices has persisted for years.  In 2016 list prices increased by 9 percent but net prices increased by 3.2 percent; in 2015 list prices increased by 11.9 percent but net prices increased by 2.4 percent, and in 2014 list price increases peaked at 13.5 percent but net prices increased by only 4.3 percent.

For 2019, the list price increases for many drugs will actually translate into very small increases in the net prices that consumers actually pay.  In fact, drug maker Allergan has indicated that, despite its increase in list prices, the net prices that patients actually pay will remain about the same as last year.

One might wonder why drug makers would bother to increase list prices if there’s little to no change in net prices.  First, at least 40 percent of the American prescription drug market is subject to some form of federal price control.  As I’ve previously explained, because these federal price controls generally require percentage rebates off of average drug prices, drug makers have the incentive to set list prices higher in order to offset the mandated discounts that determine what patients pay.

Further, as I discuss in a recent Article, the rebate arrangements between drug makers and pharmacy benefit managers (PBMs) under many commercial health plans create strong incentives for drug makers to increase list prices. PBMs negotiate rebates from drug manufacturers in exchange for giving the manufacturers’ drugs preferred status on a health plan’s formulary.  However, because the rebates paid to PBMs are typically a percentage of a drug’s list price, drug makers are compelled to increase list prices in order to satisfy PBMs’ demands for higher rebates. Drug makers assert that they are pressured to increase drug list prices out of fear that, if they do not, PBMs will retaliate by dropping their drugs from the formularies. The value of rebates paid to PBMs has doubled since 2012, with drug makers now paying $150 billion annually.  These rebates have grown so large that, today, the drug makers that actually invest in drug innovation and bear the risk of drug failures receive only 39 percent of the total spending on drugs, while 42 percent of the spending goes to these pharmaceutical middlemen.
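
To see the incentive concretely, here is a minimal sketch with hypothetical numbers (the $100 list price, the 30 percent rebate rate, and the year-two figures are made up for illustration; only the percentage-rebate structure comes from the discussion above). Raising the list price delivers more rebate dollars to the PBM even though the net price barely moves:

    def rebate_and_net(list_price, rebate_rate):
        # Rebate is a percentage of the list price; the net price is what remains.
        rebate = list_price * rebate_rate
        return rebate, list_price - rebate

    # Year 1: hypothetical $100 list price with a 30% rebate.
    # Year 2: a 10% list-price increase plus a richer rebate keeps the net price
    # roughly flat while the PBM's rebate dollars grow by nearly a third. Patients
    # paying coinsurance or the full list price, however, face the entire increase.
    for year, list_price, rebate_rate in [(1, 100.00, 0.30), (2, 110.00, 0.36)]:
        rebate, net = rebate_and_net(list_price, rebate_rate)
        print(f"Year {year}: list ${list_price:.2f}, rebate ${rebate:.2f}, net ${net:.2f}")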

Although a portion of the increasing rebate dollars may eventually find its way to patients in the form of lower co-pays, many patients still suffer from the list price increases.  The 29 million Americans without drug plan coverage pay more for their medications when list prices increase. Even patients with insurance typically have cost-sharing obligations that require them to pay 30 to 40 percent of list prices.  Moreover, insured patients within the deductible phase of their drug plan pay the entire higher list price until they meet their deductible.  Higher list prices jeopardize patients’ health as well as their finances; as out-of-pocket costs for drugs increase, patients are less likely to adhere to their medication routine and more likely to abandon their drug regimen altogether.

Policymakers must realize that the current system of government price controls and distortive rebates creates perverse incentives for drug makers to continue increasing drug list prices. Pointing the finger at drug makers alone misrepresents the problem at hand.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and Congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest,

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’ disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps that I didn’t address in that original piece. The first relates to expert bias and the second concerns office organization.  

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than it would have by simply choosing outcomes at random. In the technical parlance, this means expert opinions were not calibrated; there wasn’t a correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events that experts deemed impossible occurred with some regularity. In a number of fields, these events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to exhibit overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature leads to two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and more risky political ventures within the law.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, there is a diminishing marginal predictive return to knowledge. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. The coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event,

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believe the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Toward the end of the piece, Binder laid bare his concern, which is shared by many,

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and on videos online. Over time, external communication has risen to a prominent role in Congressional political offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard and fast conclusions, it could help explain why expanded tech expertise hasn’t been a winning legislative issue. The demand just isn’t there. And given the priorities offices actually display, more expertise might not yield much benefit, while also handing offices a potential source of cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

Last week, Senator Orrin Hatch, Senator Thom Tillis, and Representative Bill Flores introduced the Hatch-Waxman Integrity Act of 2018 (HWIA) in both the Senate and the House of Representatives.  If enacted, the HWIA would help to ensure that the unbalanced inter partes review (IPR) process does not stifle innovation in the drug industry and jeopardize patients’ access to life-improving drugs.

Created under the America Invents Act of 2011, IPR is a new administrative pathway for challenging patents. It was, in large part, created to fix the problem of patent trolls in the IT industry; the trolls allegedly used questionable or “low quality” patents to extort profits from innovating companies.  IPR created an expedited pathway to challenge patents of dubious quality, thus making it easier for IT companies to invalidate low quality patents.

However, IPR is available for patents in any industry, not just the IT industry.  In the market for drugs, IPR offers an alternative to the litigation pathway that Congress created over three decades ago in the Hatch-Waxman Act. Although IPR seemingly fixed a problem that threatened innovation in the IT industry, it created a new problem that directly threatened innovation in the drug industry. I’ve previously published an article explaining why IPR jeopardizes drug innovation and consumers’ access to life-improving drugs. With Hatch-Waxman, Congress sought to achieve a delicate balance between stimulating innovation from brand drug companies, who hold patents, and facilitating market entry from generic drug companies, who challenge the patents.  However, IPR disrupts this balance as critical differences between IPR proceedings and Hatch-Waxman litigation clearly tilt the balance in the patent challengers’ favor. In fact, IPR has produced noticeably anti-patent results; patents are twice as likely to be found invalid in IPR challenges as they are in Hatch-Waxman litigation.

The Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts in Hatch-Waxman proceedings. In federal court, patents are presumed valid and challengers must prove each patent claim invalid by “clear and convincing evidence.” In IPR proceedings, no such presumption of validity applies and challengers must only prove patent claims invalid by the “preponderance of the evidence.”

Moreover, whereas patent challengers in district court must establish sufficient Article III standing, IPR proceedings do not have a standing requirement.  This has given rise to “reverse patent trolling,” in which entities that are not litigation targets, or even participants in the same industry, threaten to file an IPR petition challenging the validity of a patent unless the patent holder agrees to specific pre-filing settlement demands.  The lack of a standing requirement has also led to the  exploitation of the IPR process by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet.

Finally, patent owners are often forced into duplicative litigation in both IPR proceedings and federal court litigation, leading to persistent uncertainty about the validity of their patents.  Many patent challengers that are unsuccessful in invalidating a patent in district court may pursue subsequent IPR proceedings challenging the same patent, essentially giving patent challengers “two bites at the apple.”  And if the challenger prevails in the IPR proceedings (which is easier to do given the lower standard of proof), the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision.  Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision.

The pro-challenger bias in IPR creates significant uncertainty for patent rights in the drug industry.  As an example, just last week patent claims for drugs generating $6.5 billion for drug company Sanofi were invalidated in an IPR proceeding.  Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.   And, if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

The HWIA, which applies only to the drug industry, is designed to restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It eliminates challengers’ ability to file duplicative claims in both federal court and through the IPR process. Instead, they must choose between Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) and IPR (which is faster and provides certain pro-challenger provisions). In addition to eliminating generic challengers’ “second bite at the apple,” the HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.

Thus, if enacted, the HWIA would create incentives that reestablish Hatch-Waxman litigation as the standard pathway for generic challenges to brand patents.  Yet, it would preserve IPR proceedings as an option when speed of resolution is a primary concern.  Ultimately, it will restore balance to the drug industry to safeguard competition, innovation, and patients’ access to life-improving drugs.

“Our City has become a cesspool,” according to Portland police union president Daryl Turner. He was describing efforts to address the city’s large and growing homelessness crisis.

Portland Mayor Ted Wheeler defended the city’s approach, noting that every major city, “all the way up and down the west coast, in the Midwest, on the East Coast, and frankly, in virtually every large city in the world” has a problem with homelessness. Nevertheless, according to the Seattle Times, Portland is ranked among the 10 worst major cities in the U.S. for homelessness. Wheeler acknowledged, “the problem is getting worse.”

This week, the city’s Budget Office released a “performance report” for some of the city’s bureaus. One of the more eye-popping statistics is the number of homeless camps the city has cleaned up over the years.

[Chart: homeless camp cleanups by fiscal year, from the Budget Office performance report]

Keep in mind, Multnomah County reports there are 4,177 homeless residents in the entire county. But the city reports clearing more than 3,100 camps in one year. Clearly, the number of homeless in the city is much larger than reflected in the annual homeless counts.

The report makes a special note that, “As the number of clean‐ups has increased and program operations have stabilized, the total cost per clean‐up has decreased substantially as well.” Sounds like economies of scale.

Turns out, the Budget Office’s simple graphic gives enough information to estimate the economies of scale in homeless camp cleanups. Yes, it’s kinda crappy data. (Could it really be the case that in two years in a row, the city cleaned up exactly the same number of camps at exactly the same cost?) Anyway, data is data.

First we plot the total annual costs for cleanups. Of course it’s an awesome fit (R-squared of 0.97), but that’s what happens when you have three observations and two independent variables.

[Chart: total annual cleanup costs and fitted total cost curve]

Now that we have an estimate of the total cost function, we can plot the marginal cost curve (blue) and average cost curve (orange).

[Chart: marginal cost (blue) and average cost (orange) curves]

That looks like a textbook example of economies of scale: decreasing average cost. It also looks like a textbook example of natural monopoly: marginal cost lower than average cost over the relevant range of output.

What strikes me as curious is how low the implied marginal cost of a homeless camp cleanup is, as shown in the table below.

FY        Camps    Total cost    Average cost    Marginal cost
2014-15     139      $171,109          $1,231           $3,178
2015-16     139      $171,109          $1,231           $3,178
2016-17     571      $578,994          $1,014             $774
2017-18   3,122    $1,576,610            $505             $142
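
The average-cost column can be recomputed directly from the reported figures. The marginal-cost column comes from the fitted total cost curve, whose exact specification the post doesn’t spell out, so the sketch below sticks to average cost plus a cruder year-over-year cost per additional camp (my own stand-in, which also falls sharply, consistent with the economies-of-scale reading, though it won’t match the marginal-cost column exactly):

    # Reported cleanup counts and total costs by fiscal year (duplicate 2015-16 row dropped).
    data = [
        ("2014-15", 139, 171_109),
        ("2016-17", 571, 578_994),
        ("2017-18", 3_122, 1_576_610),
    ]

    for i, (fy, camps, total_cost) in enumerate(data):
        line = f"{fy}: average cost = ${total_cost / camps:,.0f}"
        if i > 0:
            _, prev_camps, prev_tc = data[i - 1]
            # Crude incremental cost per additional camp relative to the prior year
            line += f", cost per additional camp = ${(total_cost - prev_tc) / (camps - prev_camps):,.0f}"
        print(line)

    # Prints average costs of $1,231, $1,014, and $505 (matching the table) and
    # incremental costs of roughly $944 and $391 per additional camp.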

It is somewhat shocking that the marginal cost of an additional camp cleanup is only $142. The hourly wages for the cleanup crew alone would be way more than $142. Something seems fishy with the numbers the city is reporting.

My guess: The city is shifting some of the cleanup costs to other agencies, such as Multnomah County and/or the Oregon Department of Transportation. I also suspect the city is not fully accounting for the costs of the cleanups. And, I am almost certain the city is significantly underreporting how many homeless are living on Portland streets.