
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Jacob Grier (freelance writer and spirits consultant in Portland, Oregon, and the author of The Rediscovery of Tobacco: Smoking, Vaping, and the Creative Destruction of the Cigarette).]

The COVID-19 pandemic and the shutdown of many public-facing businesses have resulted in sudden shifts in demand for common goods. Demand for hand sanitizer has drastically increased among hospitals, businesses, and individuals. At the same time, demand for distilled spirits has fallen substantially, as the closure of bars, restaurants, and tasting rooms has cut craft distillers off from their primary buyers. Since ethanol is a key ingredient in both spirits and sanitizer, this situation presents an obvious opportunity for distillers to shift their production from the former to the latter. Hundreds of distilleries have made this transition, but it has not been without obstacles. Some of these reflect a real scarcity of needed supplies, but other constraints have been externally imposed by government regulations and the tax code.

Producing sanitizer

The World Health Organization provides guidelines and recipes for locally producing hand sanitizer. The relevant formulation for distilleries calls for only four ingredients: high-proof ethanol (96%), hydrogen peroxide (3%), glycerol (98%), and sterile distilled or boiled water. Distilleries are well-positioned to produce or obtain ethanol and water. Glycerol is used in only small amounts and does not currently appear to be a substantial constraint on production. Hydrogen peroxide is harder to come by, but distilleries are adapting and cooperating to ensure supply. Skip Tognetti, owner of Letterpress Distilling in Seattle, Washington, reports that one local distiller obtained a drum of 34% hydrogen peroxide, which stretches a long way when diluted to a concentration of 3%. Local distillers have been sharing this drum so that they can all produce sanitizer.
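To see why a single drum stretches so far, the standard dilution identity (C1 x V1 = C2 x V2) is all that's needed. A minimal sketch, assuming a hypothetical 30-gallon drum, since the post does not report the drum's actual size:

```python
# Back-of-the-envelope dilution math: C1 * V1 = C2 * V2.
# The 30-gallon drum size is a hypothetical; the post doesn't say.
stock_concentration = 0.34   # 34% hydrogen peroxide, as reported
target_concentration = 0.03  # 3%, the strength the WHO recipe calls for
drum_volume_gal = 30         # assumed drum size

diluted_volume_gal = drum_volume_gal * stock_concentration / target_concentration
print(f"{drum_volume_gal} gal of 34% stock makes {diluted_volume_gal:.0f} gal of 3% solution")
# -> 30 gal of 34% stock makes 340 gal of 3% solution
```

An eleven-fold stretch, before accounting for the fact that the 3% solution is itself only a minor ingredient in the final sanitizer mix.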

Another constraint is finding containers in which to put the finished product. Not all containers are suitable for holding high-proof alcoholic solutions, and supplies of those that are recommended for sanitizer are scarce. The fact that many of these bottles are produced in China has reportedly also limited the supply. Distillers are therefore having to get creative; Tognetti reports looking into shampoo bottles, and in Chicago distillers have re-purposed glass beer growlers. Through more informal channels, some distillers have allowed consumers to bring their own containers to fill with sanitizer for personal use. Food and Drug Administration labeling requirements have also prevented the use of travel-size bottles, since the bottles are too small to display the necessary information.

The raw materials for producing ethanol are also coming from some unexpected sources. Breweries are typically unable to produce alcohol at high enough proof for sanitizer, but multiple breweries in Chicago are donating beer that distilleries can bring up to the required purity. Beer giant Anheuser-Busch is also producing sanitizer with the ethanol removed from its alcohol-free beers.

In many cases, the sanitizer is donated or sold at low cost to hospitals and other essential services, or to local consumers. Online donations have helped to fund some of these efforts, and at least one food and beverage testing lab has stepped up to offer free testing to breweries and distilleries producing sanitizer to ensure compliance with WHO guidelines. Distillers report that the regulatory landscape has been somewhat confusing in recent weeks, and posts in a Facebook group have provided advice on how to get through the FDA’s registration process. In general, distillers going through the process report that agencies have been responsive. Tom Burkleaux of New Deal Distilling in Portland, Oregon says he “had to do some mighty paperwork,” but that the FDA and the Oregon Board of Pharmacy were both quick to process applications, with responses coming in just a few hours or less.

In general, the redirection of craft distilleries to producing hand sanitizer is an example of private businesses responding to market signals and the evident challenges of the health crisis to produce much-needed goods; in some cases, sanitizer represents one of their only sources of revenue during the shutdown, providing a lifeline for small businesses. The Distilled Spirits Council currently lists nearly 600 distilleries making sanitizer in the United States.

There is one significant obstacle that has hindered the production of sanitizer, however: an FDA requirement that distilleries obtain extra ingredients to denature their alcohol.

Denaturing sanitizer

According to the WHO, the four ingredients mentioned above are all that are needed to make sanitizer. In fact, the WHO specifically notes that in most circumstances it is inadvisable to add anything else: “it is not recommended to add any bittering agents to reduce the risk of ingestion of the handrubs” except in cases where there is a high probability of accidental ingestion. Further, “[…] there is no published information on the compatibility and deterrent potential of such chemicals when used in alcohol-based handrubs to discourage their abuse. It is important to note that such additives may make the products toxic and add to production costs.”

Denaturing agents are used to render alcohol either too bitter or too toxic to consume, deterring abuse by adults or accidental ingestion by children. In ordinary circumstances, there are valid reasons to denature sanitizer. In the current pandemic, however, the denaturing requirement is a significant bottleneck in production.

The federal Alcohol and Tobacco Tax and Trade Bureau (TTB) is the primary agency regulating alcohol production in the United States. The TTB took action early to encourage distilleries to produce sanitizer, officially releasing guidance on March 18 instructing them that they are free to commence production without prior authorization or formula approval, so long as they are making sanitizer in accordance with WHO guidelines. On March 23, the FDA issued its own emergency authorization of hand sanitizer production; unlike the WHO, FDA guidance does require the use of denaturants. As a result, on March 26 the TTB issued new guidance to be consistent with the FDA.

Under current rules, only sanitizer made with denatured alcohol is exempt from the federal excise tax on beverage alcohol. Federal excise taxes begin at $2.70 per gallon for low-volume distilleries and reach up to $13.50 per gallon, significantly increasing the cost of producing hand sanitizer; state excise taxes can raise these costs even higher.
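For a rough sense of the stakes, here is a back-of-the-envelope calculation of what those rates would mean for a batch of non-denatured sanitizer; the 200-gallon batch size is a hypothetical, echoing Tognetti's figure quoted below:

```python
# Hypothetical federal excise exposure if sanitizer alcohol were taxed
# as beverage spirits, using the per-gallon rates stated in the post.
low_rate, high_rate = 2.70, 13.50  # dollars per gallon
batch_gal = 200                    # assumed batch size

print(f"Tax on a {batch_gal}-gallon batch: "
      f"${low_rate * batch_gal:,.0f} to ${high_rate * batch_gal:,.0f}")
# -> Tax on a 200-gallon batch: $540 to $2,700
```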

More importantly, denaturing agents are scarce. In a Twitter thread on March 25, Tognetti noted the difficulty of obtaining them:

To be clear, if I didn’t have to track down denaturing agents (there are several, but isopropyl alcohol is the most common), I could turn out 200 gallons of finished hand sanitizer TODAY.

(As an additional concern, the Distilled Spirits Council notes that the extremely bitter or toxic nature of denaturing agents may impose additional costs on distillers given the need to thoroughly cleanse them from their equipment.)

Congress attempted to address these concerns in the CARES Act, the coronavirus relief package. Section 2308 explicitly waives the federal excise tax on distilled spirits used for the production of sanitizer; however, it leaves the formula specification in the hands of the FDA. Unless the agency revises its guidance, production in the US will be constrained by the requirement to add denaturing agents to the plentiful supply of ethanol, and distilleries that produce perfectly usable sanitizer without denaturing their alcohol will risk being targeted with enforcement actions.

Local distilleries provide agile production capacity

In recent days, larger spirits producers including Pernod-Ricard, Diageo, and Bacardi have announced plans to produce sanitizer. Given their resources and economies of scale, they may end up taking over a significant part of the market. Yet small, local distilleries have displayed the agility necessary to rapidly shift production. It’s worth noting that many of these distilleries did not exist until fairly recently. According to the American Craft Spirits Association, there were fewer than 100 craft distilleries operating in the United States in 2005. By 2018, there were more than 1,800. This growth is the result of changing consumer interests, but also the liberalization of state and local laws to permit distilleries and tasting rooms. That many of these distilleries have the capacity to produce sanitizer in a time of emergency is a welcome, if unintended, consequence of this liberalization.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Julian Morris (Director of Innovation Policy, ICLE).]

SARS-CoV2, the virus that causes COVID-19, is now widespread in the population in many countries, including the US, UK, Australia, Iran, and many European countries. Its prevalence in other regions, such as South Asia, much of South America, and Africa, is relatively unknown. The failure to contain the virus early on has meant that more aggressive measures are now necessary in order to avoid overwhelming healthcare systems, which would cause unacceptable levels of mortality. (Sadly, Italy’s health system has already been overwhelmed, forcing medical practitioners to engage in the most awful triage decisions.) Many jurisdictions, ranging from cities to entire countries, have chosen to implement mandatory lockdowns. These will likely have the desired effect of slowing transmission in the short term, but they cannot be maintained indefinitely. The challenge going forward is how to contain the spread of the virus without destroying the economy. 

In this post I will outline the elements of a proposal that I hope might do that. (I’ve been working on this for about a week and in the meantime some of the ideas have been advanced by others. E.g. this and this. Great minds clearly think alike.)

1. Identify those who have had COVID-19 and have recovered — and allow them to go back to work

While there are some reports of people who have had COVID-19 becoming reinfected, this seems to be very rare (a recent primate study implies reinfection is impossible), and the alleged cases may have been a result of false negative tests followed by relapse. The general presumption is that having had the disease is likely to confer immunity for several months at least. Moreover, people with immunity who no longer show symptoms of the disease are very unlikely to transmit it. Allowing those people to go back to work will lessen the burden of the lockdown without appreciably increasing the risk of infection.

One group of such people is readily identifiable, though small: Those who tested positive for COVID-19 and subsequently recovered. Those people should be permitted to go back to work immediately.

2. Where possible, test, trace, treat, isolate

The town of Vo in Northern Italy, the site of the first death in the country from COVID-19, appears to have stopped the disease from spreading in about three weeks. It did so through a combination of universal testing, two weeks of strict lockdown, and quarantine of cases.  Could this be replicated elsewhere? 

Vo has a population of 3,300, so universal testing was not the gargantuan exercise it would be in, say, the continental US. Some larger jurisdictions have had similar success without resorting to universal testing and lockdown. South Korea managed to contain the spread of SARS-CoV2 relatively quickly through a combination of: social distancing (including closing schools and restricting large gatherings), testing anyone who had COVID-19 symptoms (and increasingly those without symptoms), tracing and testing of those who had contact with those symptomatic individuals, treating those with severe symptoms, quarantining those who tested positive but had no or only mild symptoms (the quarantine was monitored using a phone app and strictly enforced), and publicly sharing detailed information about the known incidence of the virus. 

A study of 181 cases in China published in the Annals of Internal Medicine found that the mean incubation period for COVID-19 is just over 5 days and only about 1 in 100 cases take longer than 14 days. By implication, if people have been strictly following the guidelines on avoiding contact with others, washing/sanitizing hands, sanitizing other objects, and avoiding hand-to-face contact, it should be possible, after two weeks of lockdown, to identify the vast majority of people who are not infected by testing everyone for the presence of SARS-CoV2 itself.
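Those two figures are consistent with the right-skewed, roughly lognormal incubation distributions reported in the literature. A minimal sketch, with parameters chosen only to approximate the figures above, not taken from the study's actual fit:

```python
from scipy.stats import lognorm

# Illustrative lognormal incubation model. The median and sigma are
# assumptions picked to roughly match the numbers cited in the post,
# not the Annals study's fitted parameters.
median_days = 5.1
sigma = 0.45
incubation = lognorm(s=sigma, scale=median_days)

print(f"P(incubation > 14 days) = {incubation.sf(14):.3f}")
# -> roughly 0.012, i.e., on the order of 1 in 100 cases
```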

But that’s a series of big ifs. Since it takes a few days for the virus to replicate in the body to the point at which it is detectable, people who have recently been infected might test negative. Also, it is unlikely to be feasible logistically to test a significant proportion of the population for SARS-CoV2 in a short period of time. Existing tests require the use of RT-PCR, which is expensive and time-consuming, not least because it can only be done at a lab, and while the capacity for such tests is increasing, it is likely around 50,000 tests per day in the entire US.

Test, trace, treat, and isolate may be a feasible option for towns and even cities that currently have a relatively low incidence of SARS-CoV2. However, given the lethargic progress of testing in places such as the US, UK, and India, and hence poor existing knowledge of the extent of infection, it will not be a panacea.

3. Test as many people as possible for the presence of antibodies to SARS-CoV2

Outside those few places that have dramatically ramped up testing, it is likely that many more people have had COVID-19 than have been tested, either because they were asymptomatic or because they did not require clinical attention. Many, perhaps most, of those people will no longer have the virus in their system, but they should still have antibodies (indicating immunity). In order to identify those people, there should be widespread testing for antibodies to SARS-CoV2.

Antibody tests are inexpensive, quick, and some can be done at home with minimal assistance. Numerous such tests have already been produced or are in development (see the list here). For example, Chinese manufacturer Innovita has produced a test that appears to be effective: in a clinical trial of 447 patients, it identified the presence of antibodies to SARS-CoV2 in 87.3% of clinically confirmed cases of COVID-19 (i.e., there were approximately 13% false negatives) but produced zero false positives. Innovita’s test was approved by China’s equivalent of the FDA and has been used widely there.

Scanwell Health, a San Francisco-based startup, has an exclusive license to produce Innovita’s test in the U.S. and has already begun the process of obtaining approval from the US FDA under its Emergency Use Authorization. Scanwell estimates that the total cost of the test, including overnight shipping of the kit and support from a doctor or nurse practitioner from Lemonaid Health, will be around $70. One downside to Scanwell Health’s offering, however, is that it expects it to take 6-8 weeks to begin shipping testing kits once it receives authorization from the FDA.

So far, the FDA has approved at least one SARS-CoV2 antibody test, produced by Aytu Bioscience in Colorado. But Aytu’s test is designed for use by physicians, not at home. In Europe, at least one antibody test, produced by German company PharmACT, is already available. (That test has similar characteristics to Innovita’s.) Another has been approved by the MHRA in the UK for physician use and is awaiting approval for home use; the UK government has ordered 3.5 million of these tests, with the aim of distributing 250,000 per day by the end of April. 

Unfortunately, some people who have antibodies to SARS-CoV2 will also still be infectious. However, because different antibodies develop at different times during the course of infection, it may be possible to distinguish those who are still infectious from those who are no longer infectious. Specifically, immunoglobulin (Ig) M is present in larger amounts while the viral load is still present, while IgG is present in larger amounts later on (see e.g. this and the figure below). So, by testing for the presence of both IgM and IgG it should be possible to identify a large proportion of those who have had COVID-19 but are no longer infectious. (The currently available antibody tests result in about 13 percent false negatives, making them inappropriate as a means of screening out those who do not have COVID-19. But they produce zero false positives, making them ideal for identifying those who definitely have or have had COVID-19). In essence, people whose IgG test is positive but IgM test is negative can then go back to work. In addition, people who have had COVID-19 symptoms, are now symptom-free, and test positive for antibodies, should be allowed to go back to work.
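The asymmetry between ruling people in and ruling them out follows from simple arithmetic. A minimal sketch, using the trial figures cited above and a hypothetical group of 1,000 true antibody carriers:

```python
# Why zero false positives make the test good for ruling people *in*,
# even though ~13% false negatives make it bad for ruling people *out*.
sensitivity = 0.873  # share of true cases detected (Innovita trial figure)
specificity = 1.0    # zero false positives reported

carriers = 1000                    # hypothetical people with antibodies
detected = carriers * sensitivity  # test positive
missed = carriers - detected       # test negative despite having antibodies

print(f"Of {carriers} true antibody carriers: {detected:.0f} test positive, "
      f"{missed:.0f} are missed")
# -> Of 1000 true antibody carriers: 873 test positive, 127 are missed
# With specificity = 1.0, nobody without antibodies ever tests positive,
# so a positive result reliably identifies someone who has had COVID-19.
```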

4. Test for SARS-CoV2 among those who test negative for antibodies — and ensure that everyone who tests positive remains in isolation

Those people who test negative for antibodies using the quick immunoassay, as well as those who test positive for both IgG and IgM (indicating that they may still be infectious), should then be tested for SARS-CoV2 itself using the RT-PCR test described above. Those who test negative for SARS-CoV2 should then be permitted to go back to work, while those who test positive should be required to remain in isolation and to seek treatment if necessary.

5. Repeat steps 3 and 4 until nobody tests positive for COVID-19

By repeating steps 3 and 4, it should be possible gradually to enable the vast majority of the population to return to work, and thence to a life of greater normalcy, within a matter of weeks.

6. Some (possibly rather large) caveats

All of this relies on: (a) the ability rapidly to expand testing and (b) widespread compliance with isolation requirements. Neither of these conditions is by any means guaranteed, not least because the rules effectively discriminate in favor of people who have had COVID-19, which may create a perverse incentive to violate not only the isolation requirements but all the recommended hygiene practices — and thereby intentionally become infected with SARS-CoV2 on the presumption that they will then be able to go back to work sooner than otherwise. So, before this is rolled out, it is important to ensure that there will be widespread testing for COVID-19 in a timeframe shorter than the likely total time for contracting and recovering from COVID-19.

In addition, if test results are to be used as a means of establishing a person’s ability to travel and work while others are still under lockdown, it is important that there be a means of verifying the status of individuals. That might be possible through the use of an app, for example; such an app might also help policymakers make better resource-allocation decisions.

Also, at-risk individuals should be strongly advised to remain in isolation until there is no further evidence of community transmission. 

7. The mechanics of testing

Given that there are not currently sufficient tests available for everyone to be tested in most locations, one obvious question is: who should be tested? As noted above, it makes sense initially to target those who have had COVID-19 symptoms and have recovered. Since only those people who have had such symptoms—and possibly their physician if they presented with their symptoms—will know who they are, this will rely largely on trust. (It’s possible that self-reporting apps could help.) 

But it may make sense initially to target tests more narrowly. The UK is initially targeting the antibody detection kits to healthcare and other key workers—people who are essential to the continued functioning of the country. That makes sense and could easily be applied in other places. 

Assuming that key workers can be supplied with antibody detection kits quickly, distribution should then be opened up more widely. No doubt insurance companies will be making decisions about the purchase of testing kits. Ideally, however, individuals should be able to buy kits such as Scanwell’s without going through a bureaucratic process, whether that be their insurance company or the NHS. And vendors should be free to price kits as they see fit, without worrying about the prospect of being subject to price caps such as those imposed by Medicaid or the VA, which have the perverse effect of incentivizing vendors to increase the list price. Finally, in order to increase the supply of tests as rapidly as possible, regulatory agencies should be encouraged to issue emergency approvals quickly. Having more manufacturers with a diverse array of tests available will increase access to testing more quickly and likely lead to more accurate testing too. Agencies such as the FDA should see this as their absolute priority right now. If the Mayo Clinic can compress six months’ product development into a month, the FDA can surely do its review far more quickly too. Lives—and the economy—depend upon it.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Brent Skorup (Senior Research Fellow, Mercatus Center, George Mason University).]

One of the most visible economic effects of the COVID-19 spread is the decrease in airline customers. Alec Stapp alerted me to the recent outrage over “ghost flights,” where airlines fly nearly empty planes to maintain their “slots.” 

The airline industry is unfortunately in economic freefall as governments prohibit and travelers pull back on air travel. When the health and industry crises pass, lawmakers will have an opportunity to evaluate the mistakes of the past when it comes to airport congestion and airspace design.

This issue of ghost flights pops up occasionally and offers a lesson in the problems with government rationing of public resources. In this case, the public resource is airport slots: designated times, say, 15 or 30 minutes, during which a plane may take off or land at an airport. (Last week US and EU regulators temporarily waived the use-it-or-lose-it rule for slots to mitigate the embarrassing cost and environmental damage caused by forcing airlines to fly empty planes.)

The slots at major hubs at peak times of day are extremely scarce; there are only so many hours in a day. Today, slot assignments are administratively rationed in a way that favors large, incumbent airlines. As the Wall Street Journal summarized last year,

For decades, airlines have largely divided runway access between themselves at twice-yearly meetings run by the IATA (an airline trade group).

Airport slots are property. They’re valuable. They can be defined, partitioned, leased, put up as collateral, and, in the US, they can be sold and transferred within or between airports.

You just can’t call slots property. Many lawmakers, regulators, and airline representatives refuse to acknowledge the obvious. Stating that slots are valuable public property would make clear the anticompetitive waste that the 40-year slot assignment experiment generates. 

Like many government programs, slot rationing began in the US decades ago as a temporary response to congestion at New York airports. Slots are currently used to ration access at LGA, JFK, and DCA. And while it doesn’t use formal slot rationing there, the FAA also rations access at four other busy airports: ORD, EWR, LAX, and SFO.

Fortunately, cracks are starting to form. In 2008, at the tail end of the Bush administration, the FAA proposed to auction some slots at New York City’s three airports. The plan was delayed by litigation from incumbent airlines and an adverse finding from the GAO. With a change in administration, the Obama FAA rescinded the plan in 2009.

Before the Obama FAA rescission, the mask slipped a bit in the GAO’s criticism of the slot auction plan:

FAA’s argument that slots are property proves too much—it suggests that the agency has been improperly giving away potentially millions of dollars of federal property, for no compensation, since it created the slot system in 1968.

Gulp.

Though the GAO helped scuttle the plan, the damage has been done. The idea has now entered public policy discourse: giving away valuable public property is precisely what’s going on. 

The implicit was made explicit in 2011 when, despite spiking the Bush FAA plan, the Obama FAA auctioned two dozen high-value slots. (The reversal and lack of controversy is puzzling to me.) Delta and US Airways wanted to swap some 160 slots at New York and DC airports. As a condition of the mega-swap, the Obama FAA required them to divest 24 slots at those popular airports, which the agency auctioned to new entrants. Seven low-fare airlines bid in the auction, and JetBlue and WestJet won the divested slots, paying about $90 million combined.

The older fictions are rapidly eroding. There is an active secondary market in slots in some nations, and when prices are released it becomes clear that the legacy rationing amounts to public property set-asides for insiders. In 2016 it leaked, for instance, that an airline had paid £58 million for a pair of take-off and landing slots at Heathrow. Other slot sales are in the tens of millions of dollars.

The 2011 FAA auctions and the loosening of rules globally around slot sales signal that the competition benefits from slot markets are too obvious to ignore. Competition from new entry drives down airfare and increases the number of flights.

For instance, a few months ago researchers used a booking app to scour 50 trillion flight itineraries to see new entrants’ effect on airline ticket prices between 2017 and 2019. As the Wall Street Journal reported, the entry of a low-fare carrier reduced ticket prices by 17% on average. The bigger effect was on output: new entry led to a 30% year-over-year increase in flights.

It’s becoming harder to justify the legacy view, which allows incumbent airlines to dominate slot allocations via international conferences and national regulations that “grandfather” slot usage. In a separate article last year, the Wall Street Journal reported that airlines are reluctantly ceding more power to airports in the assignment of slots. This is another signal in the long-running tug-of-war between airports and airlines. Airports generally want to open slots for new competitors; incumbent airlines do not.

The reason for the change of heart? The Journal says,

Airlines and airports reached the deal in part because of concerns governments should start to sell slots.

Gulp. Ghost flights are a government failure, but also a rational response to governments withholding the benefits of property from airlines. The slot rationing system encourages uneconomical flights, smaller planes, and excess carbon emissions. The COVID-19 crisis gave the public a glimpse of the dysfunctional system. It won’t be easy, but aviation regulators worldwide need to reassess slots policy and airspace access before the administrative rationing system spreads to the emerging urban air mobility and drone delivery markets.

The following is the first in a new blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available at https://truthonthemarket.com/symposia/the-law-economics-of-the-covid-19-pandemic/.


Yesterday was President Trump’s big “Social Media Summit” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court):

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, since the Reagan FCC abolished the fairness doctrine in 1987, have consistently angled for its resurrection in some form, as well as for other policies inconsistent with the “free-speech-as-liberty” view. Thus Democratic FCC commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if it means compelling speech upon private actors. 

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of 75 scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230 — as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.

In a recent NY Times opinion piece, Tim Wu, like Elizabeth Holmes, lionizes Steve Jobs. Like Jobs with the iPod and iPhone, and Holmes with the Theranos Edison machine, Wu tells us we must simplify the public’s experience of complex policy into a simple box with an intuitive interface. In this spirit he argues that “what the public wants from government is help with complexity,” such that “[t]his generation of progressives … must accept that simplicity and popularity are not a dumbing-down of policy.”

This argument provides remarkable insight into the complexity problems of progressive thought. Three of these are taken up below: the mismatch of comparing the work of the government to the success of Jobs; the mismatch between Wu’s telling of and Jobs’s actual success; and the latent hypocrisy in Wu’s “simplicity for me, complexity for thee” argument.

Contra Wu’s argument, we need politicians who embrace and lay bare the complexity of policy issues. Too much of our political moment is dominated by demagogues on every side of policy debates offering simple solutions to simplified accounts of complex policy issues. We need public intellectuals, and hopefully politicians as well, to make the case for complexity. Our problems are complex, and solutions to them are hard (and sometimes unavailing). Without leaders willing to steer into complexity, we can never have a polity able to address complexity.

I. “Good enough for government work” isn’t good enough for Jobs

As an initial matter, there is a great deal of wisdom in Wu’s recognition that the public doesn’t want complexity. As I said at the annual Silicon Flatirons conference in February, consumers don’t want a VCR with lots of dials and knobs that let them control lots of specific features—they just want the damn thing to work. And as that example is meant to highlight, once it does work, most consumers are happy to leave well enough alone (as demonstrated by millions of clocks that would continue to blink 12:00 if VCRs weren’t so 1990s).

Where Wu goes wrong, though, is that he fails to recognize that despite this desire for simplicity, for two decades VCR manufacturers designed and sold VCRs with clocks that were never set—a persistent blinking to constantly remind consumers of their own inadequacies. Had the manufacturers had any insight into the consumer desire for simplicity, all those clocks would have been used for something—anything—other than a reminder that consumers didn’t know how to set them. (Though, to their credit, these devices were designed to operate as most consumers desired without imposing any need to set the clock upon them—a model of simplicity in basic operation that allows consumers to opt-in to a more complex experience.)

If the government were populated by visionaries like Jobs, Wu’s prescription would be wise. But Jobs was a once-in-a-generation thinker. No one in a generation of VCR designers had the insight to design a VCR without a clock (or at least a clock that didn’t blink in a constant reminder of the owner’s inability to set it). And similarly few among the ranks of policy designers are likely to have his abilities, either. On the other hand, the public loves the promise of easy solutions to complex problems. Charlatans and demagogues who would cast themselves in his image, like Holmes did with Theranos, can find government posts in abundance.

Of course, in his paean to offering the public less choice, Wu, himself a sometime designer of government policy, compares the art of policy design to the work of Jobs—not of Holmes. But where he promises a government run in the manner of Apple, he would more likely give us one more in the mold of Theranos.

There is a more pernicious side to Wu’s argument. He speaks of respect for the public, arguing that “Real respect for the public involves appreciating what the public actually wants and needs,” and that “They would prefer that the government solve problems for them.” Another aspect of respect for the public is recognizing their fundamental competence—that progressive policy experts are not the only ones who are able to understand and address complexity. Most people never set their VCRs’ clocks because they felt no need to, not because they were unable to figure out how to do so. Most people choose not to master the intricacies of public policy. But this is not because the progressive expert class is uniquely able to do so. It is, as Wu himself notes, that most people do not have the unlimited time or attention that would be needed to do so—time and attention that is afforded to him by his social class.

Wu’s assertion that the public “would prefer that the government solve problems for them” carries echoes of Louis Brandeis, who famously said of consumers that they were “servile, self-indulgent, indolent, ignorant.” Such a view naturally gives rise to Wu’s assumption that the public wants the government to solve problems for them. It assumes that they are unable to solve those problems on their own.

But what Brandeis and progressives cast in his mold attribute to servile indolence is more often a reflection that hoi polloi simply do not have the same concerns as Wu’s progressive expert class. If they had the time to care about the issues Wu would devote his government to, they could likely address them on their own. The fact that they don’t is less a reflection of the public’s ability than of its priorities.

II. Jobs had no monopoly on simplicity

There is another aspect to Wu’s appeal to simplicity in design that is, again, captured well in his invocation of Steve Jobs. Jobs was exceptionally successful with his minimalist, simple designs. He made a fortune for himself and more for Apple. His ideas made Apple one of the most successful companies, with one of the largest user bases, in the history of the world.

Yet many people hate Apple products. Some of these users prefer to have more complex, customizable devices—perhaps because they have particularized needs or perhaps simply because they enjoy having that additional control over how their devices operate and the feeling of ownership that that brings. Some users might dislike Apple products because the interface that is “intuitive” to millions of others is not at all intuitive to them. As trivial as it sounds, most PC users are accustomed to two-button mice—transitioning to Apple’s one-button mouse is exceptionally discomfiting for many of these users. (In fairness, the one-button mouse design used by Apple products is not attributable to Steve Jobs.) And other users still might prefer devices that are simple in other ways, so are drawn to other products that better cater to their precise needs.

Apple has, perhaps, experienced periods of market dominance with specific products. But this has never been durable—Apple has always faced competition. And this has ensured that those parts of the public that were not well-served by Jobs’s design choices were not bound to use them—they always had alternatives.

Indeed, that is the redeeming aspect of the Theranos story: the market did what it was supposed to. While too many consumers may have been harmed by Holmes’ charlatan business practices, the reality is that once she was forced to bring the company’s product to market it was quickly outed as a failure.

This is how the market works. Companies that design good products, like Apple, are rewarded; other companies then step in to compete by offering yet better products or by addressing other segments of the market. Some of those companies succeed; most, like Theranos, fail.

This dynamic simply does not exist with government. Government is a policy monopolist. A simplified, streamlined, policy that effectively serves half the population does not effectively serve the other half. There is no alternative government that will offer competing policy designs. And to the extent that a given policy serves part of the public better than others, it creates winners and losers.

Of course, the right response to the inadequacy of Wu’s call for more, less complex policy is not that we need more, more complex policy. Rather, it’s that we need less policy—at least policy being dictated and implemented by the government. This is one of the stalwart arguments we free market and classical liberal types offer in favor of market economies: they are able to offer a wider range of goods and services that better cater to a wider range of needs of a wider range of people than the government can. The reason policy grows complex is that it is trying to address complex problems; and when it fails to address those problems on a first cut, the solution is more often than not to build “patch” fixes on top of the failed policies. The result is an ever-growing book of rules bound together with voluminous “kludges” that is forever out of step with the changing realities of a complex, dynamic world.

The solution to so much complexity is not to sweep it under the carpet in the interest of offering simpler, but only partial, solutions catered to the needs of an anointed subset of the public. The solution is to find better ways to address those complex problems—and oftentimes it’s simply the case that the market is better suited to such solutions.

III. A complexity: What does Wu think of consumer protection?

There is a final, and perhaps most troubling, aspect to Wu’s argument. He argues that respect for the public does not require “offering complete transparency and a multiplicity of choices.” Yet that is what he demands of business. As an academic and government official, Wu has been a loud and consistent consumer protection advocate, arguing that consumers are harmed when firms fail to provide transparency and choice—and that the government must use its coercive power to ensure that they do so.

Wu derives his insight that simpler-design-can-be-better-design from the success of Jobs—and recognizes more broadly that the consumer experience of products of the technological revolution (perhaps one could even call it the tech industry) is much better today because of this simplicity than it was in earlier times. Consumers, in other words, can be better off with firms that offer less transparency and choice. This, of course, is intuitive when one recognizes (as Wu has) that time and attention are among the scarcest of resources.

Steve Jobs and Elizabeth Holmes both understood that the avoidance of complexity and minimizing of choices are hallmarks of good design. Jobs built an empire around this; Holmes cost investors hundreds of millions of dollars in her failed pursuit. But while Holmes failed where Jobs succeeded, her failure was not tragic: Theranos was never the only medical testing laboratory in the market and, indeed, was never more than a bit player in that market. For every Apple that thrives, the marketplace erases a hundred Theranoses. But we do not have a market of governments. Wu’s call for policy to be more like Apple is a call for most government policy to fail like Theranos. Perhaps where the challenge is to do more complex policy simply, the simpler solution is to do less, but simpler, policy well.

Conclusion

We need less dumbing down of complex policy in the interest of simplicity; and we need leaders who are able to make citizens comfortable with and understanding of complexity. Wu is right that good policy need not be complex. But the lesson from that is not that complex policy should be made simple. Rather, the lesson is that policy that cannot be made simple may not be good policy after all.

“Calm Down about Common Ownership” is the title of a piece Thom Lambert and I published in the Fall 2018 issue of Regulation, which just hit online. The article is a condensed version of our recent paper, “The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.” In short, we argue that concern about common ownership lacks a theoretically sound foundation and is built upon faulty empirical support. We also explain why proposed “fixes” would do more harm than good.

Over the past several weeks we wrote a series of blog posts here that summarize or expand upon different parts of our argument. To pull them all into one place:

This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.

In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.

Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.

But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.

The longstanding, global transition from telecom regulation to antitrust enforcement

The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and sometimes even inordinately violent).

On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.

While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.

To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.

Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013 to 2017, Director of the Bureau of Economics at the FTC from 2012 to 2013, and Chief Economist at the FCC from 1999 to 2000, noted in 2002, for instance, that

[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.

Globally, nowhere perhaps has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:

The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)

To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets and left it to national regulators to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries due to an operator holding “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, reduced it further to four markets, all wholesale markets, that could potentially require ex ante regulation.

It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, an antitrust regulator, or a multi-purpose authority (i.e., one with authority over both competition and telecom), means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.

And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.

Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.

How antitrust oversight of telecom markets has been implemented around the globe

In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, member states have adopted a number of different organizational reforms.

Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Countries outside Europe, including Mexico, have also followed this model.

Other European Member States have eliminated their telecom regulator altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which includes disbanding its telecom regulator and passing the regulation of the sector to various executive agencies.

Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.

A few brief case studies will illuminate these and other reforms:

The Netherlands

In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).
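
For reference, the calculation the ACM harmonized is conventionally the after-tax weighted average cost of capital. A minimal sketch of the textbook formula (my illustration of the standard form, not the ACM’s published methodology):

$$\text{WACC} \;=\; \frac{E}{E+D}\,r_E \;+\; \frac{D}{E+D}\,r_D\,(1-t)$$

where $E$ and $D$ are the market values of equity and debt, $r_E$ and $r_D$ the costs of equity and debt, and $t$ the corporate tax rate. Harmonizing inputs of this kind across sectors is what makes regulated firms’ allowed returns comparable from one market to the next.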

The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:

The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.

The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.

Spain

In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, such as fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures

a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.

As in the Netherlands, additional benefits include a significantly lower risk of regulatory capture, achieved by “preventing the alignment of the authority’s performance with sectoral interests.”

Denmark

In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties between four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.

Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime which, as Layton and Kane note, has helped turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.

New Zealand

The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining these functions in a single regulator, New Zealand asserts that it can administer government operations more cost-effectively. Combining regulatory functions also creates spillover benefits: competition analysis, for example, is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable industry knowledge. As in the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”

Advantages identified by other organizations

The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.

At the same time, it’s worth noting that eliminating the telecom regulator has not come off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t inherent in the choice between competition law and sector-specific telecom regulation. Nothing about these cases suggests that sector-specific, economics-based telecom regulation is inherently essential, or that replacing it with antitrust oversight can’t work.

Contrasting approaches to net neutrality in the EU and New Zealand

Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decision-making in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practice by implementing ex ante net neutrality regulations without hard evidence of widespread harm and without any competition analysis to justify the decision. The EU placed net neutrality under the universal service and users’ rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.

BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.

Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition, weighing the competitive benefits and harms of zero rating, concluded that “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”

The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.

The EU example demonstrates that where a telecom regulator perceives a novel problem, it is competition law, grounded in economic principles, that brings a clear framework to bear.

In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined under existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.

Currently, there is broad consensus among stakeholders, including local content providers and networking equipment manufacturers, that there is no need for ex ante regulation of net neutrality. The wholesale ISP Chorus states, for example, that “in any event, the United States’ transparency and non-interference requirements [from the 2015 OIO] are arguably covered by the TCF Code disclosure rules and the provisions of the Commerce Act.”

The TCF Code is a mandatory code of practice establishing what information ISPs must disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for investigation under the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, as well as practices that substantially lessen competition or that constitute price fixing or abuse of market power.

In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.

In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was

unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.

Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.

The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.

Conclusion

Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.

On September 28, the American Antitrust Institute released a report (“AAI Report”) on the state of U.S. antitrust policy, provocatively entitled “A National Competition Policy:  Unpacking the Problem of Declining Competition and Setting Priorities for Moving Forward.”  Although the AAI Report contains some valuable suggestions, in important ways it reminds one of the drunkard who seeks his (or her) lost key under the nearest lamppost.  What it requires is greater sobriety and a broader vision of the problems that beset the American economy.

The AAI Report begins by asserting that “[n]ot since the first federal antitrust law was enacted over 120 years ago has there been the level of public concern over the concentration of economic and political power that we see today.”  Well, maybe, although I for one am not convinced.  The paper then states that “competition is now on the front pages, as concerns over rising concentration, extraordinary profits accruing to the top slice of corporations, slowing innovation, and widening income and wealth inequality have galvanized attention.”  It then goes on to call for a more aggressive federal antitrust enforcement policy, with particular attention paid to concentrated markets.  The implicit message is that dedicated antitrust enforcers during the Obama Administration, led by Federal Trade Commission Chairs Jonathan Leibowitz and Edith Ramirez, and Antitrust Division chiefs Christine Varney, Bill Baer, and Renata Hesse (Acting), have been laggard or asleep at the switch.  But where is the evidence for this?  I am unaware of any, and the AAI doesn’t say.  Indeed, federal antitrust officials in the Obama Administration consistently have called for tough enforcement, and they have actively pursued vertical as well as horizontal conduct cases and novel theories of IP-antitrust liability.  Thus, the AAI Report’s contention that antitrust needs to be “reinvigorated” is unconvincing.

The AAI Report highlights three “symptoms” of declining competition:  (1) rising concentration, (2) higher profits to the few and slowing rates of start-up activity, and (3) widening income and wealth inequality.  But these concerns are not something that antitrust policy is designed to address.  Mergers that threaten to harm competition are within the purview of antitrust, but modern antitrust rightly focuses on the likely effects of such mergers, not on the mere fact that they may increase concentration.  Furthermore, antitrust assesses the effects of business agreements on the competitive process.  Antitrust does not ask whether business arrangements yield “unacceptably” high profits, or “overly low” rates of business formation, or “unacceptable” wealth and income inequality.  Indeed, antitrust is not well equipped to address such questions, nor does it possess the tools to “solve” them (even assuming they need to be solved).

In short, if American competition is indeed declining based on the symptoms flagged by the AAI Report, the key to the solution will not be found by searching under the antitrust policy lamppost for illumination.  Rather, a more thorough search, with the help of “common sense” flashlights, is warranted.

The search outside the antitrust spotlight is not, however, a difficult one.  Finding the explanation for lagging competitive conditions in the United States requires no great policy legerdemain, because sound published research already provides the answer.  And that answer centers on government failures, not private sector abuses.

Consider overregulation.  In its annual Red Tape Rising reports (see here for the latest one), the Heritage Foundation has documented the growing burden of federal regulation on the American economy.  Overregulation acts like an implicit tax on businesses and disincentivizes business start-ups.  Moreover, as regulatory requirements grow in complexity and burdensomeness, they increasingly place a premium on large size – relatively larger businesses can better afford the fixed costs of establishing regulatory compliance departments than can their smaller rivals.  Heritage Foundation scholar Norbert Michel summarizes this phenomenon in his article Dodd-Frank and Glass-Steagall – ‘Consumer Protection for Billionaires’:

Even when it’s not by nefarious design, we end up with rules that favor the largest/best-funded firms over their smaller/less-well-funded competitors. Put differently, our massive regulatory state ends up keeping large firms’ competitors at bay.  The more detailed regulators try to be, the more complex the rules become. And the more complex the rules become, the smaller the number of people who really care. Hence, more complicated rules and regulations serve to protect existing firms from competition more than simple ones. All of this means consumers lose. They pay higher prices, they have fewer choices of financial products and services, and they pretty much end up with the same level of protection they’d have with a smaller regulatory state.

What’s worse, some of the most onerous regulatory schemes are explicitly designed to favor large competitors over small ones.  A prime example is financial services regulation, and, in particular, the rules adopted pursuant to the 2010 Dodd-Frank Act (other examples could readily be provided).  As a Heritage Foundation report explains (footnote citations omitted):

The [Dodd-Frank] act was largely intended to reduce the risk of a major bank failure, but the regulatory burden is crippling community banks (which played little role in the financial crisis). According to Harvard University researchers Marshall Lux and Robert Greene, small banks’ share of U.S. commercial banking assets declined nearly twice as much since the second quarter of 2010—around the time of Dodd–Frank’s passage—as occurred between 2006 and 2010. Their share currently stands at just 22 percent, down from 41 percent in 1994.

The increased consolidation rate is driven by regulatory economies of scale—larger banks are better suited to handle increased regulatory burdens than are smaller banks, causing the average costs of community banks to rise. The decline in small bank assets spells trouble for their primary customer base—small business loans and those seeking residential mortgages.

Ironically, Dodd–Frank proponents pushed for the law as necessary to rein in the big banks and Wall Street. In fact, the regulations are giving the largest companies a competitive advantage over smaller enterprises—the opposite outcome sought by Senator Christopher Dodd (D–CT), Representative Barney Frank (D–MA), and their allies. As Goldman Sachs CEO Lloyd Blankfein recently explained: “More intense regulatory and technology requirements have raised the barriers to entry higher than at any other time in modern history. This is an expensive business to be in, if you don’t have the market share in scale.”

In sum, as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large, wealthy, and well-connected rent-seekers at the expense of smaller and more dynamic competitors.

More generally, as Heritage Foundation President Jim DeMint and Heritage Action for America CEO Mike Needham have emphasized, well-connected businesses use lobbying and inside influence to benefit themselves by having government enact special subsidies, bailouts, and complex regulations, including special tax preferences. Those special preferences undermine competition on the merits by firms that lack insider status, to the public detriment.  Relatedly, the hideously complex system of American business taxation, which features the highest corporate tax rates in the developed world and is more easily manipulated by very large corporate players, depresses wages and is a serious drag on the American economy, as shown by Heritage Foundation scholars Curtis Dubay and David Burton.  In a similar vein, David Burton testified before Congress in 2015 on how the various excesses of the American regulatory state (including bad tax, health care, immigration, and other regulatory policies, combined with an overly costly legal system) undermine U.S. entrepreneurship (see here).

In other words, special subsidies, bailouts, regulations, and tax preferences for the well-connected are part and parcel of crony capitalism, which (1) favors large businesses, tending to raise concentration; (2) confers higher profits on the well-connected while discouraging small business entrepreneurship; and (3) promotes income and wealth inequality, with the greatest returns going to the wealthiest government cronies who know best how to play the Washington “rent seeking game.”  Unfortunately, crony capitalism has grown like Topsy during the Obama Administration.

Accordingly, I would counsel AAI to turn its scholarly gaze away from antitrust and toward the true source of the American competitive ailments it spotlights:  crony capitalism enabled by the growth of big government special interest programs and increasingly costly regulatory schemes.  Let’s see if AAI takes my advice.

Yesterday the Heritage Foundation published a Legal Memorandum, in which I explain the need for the reform of U.S. Food and Drug Administration (FDA) regulation, in order to promote path-breaking biopharmaceutical innovation.  Highlights of this Legal Memorandum are set forth below.

In recent decades, U.S. and foreign biopharmaceutical companies (makers of drugs that are based on chemical compounds or biological materials, such as vaccines) and medical device manufacturers have been responsible for many cures and advances in treatment that have benefited patients’ lives.  New cancer treatments, medical devices, and other medical discoveries are being made at a rapid pace.

The biopharmaceutical industry is also a major generator of American economic growth and a high-technology leader.  The U.S. biopharmaceutical sector directly employs over 810,000 workers, supports 3.4 million American jobs across the country, contributed almost one-fourth of all domestic research and development (R&D) funded by U.S. businesses in 2013—more than any other single sector—and contributes roughly $790 billion a year to the American economy, according to one study.   American biopharmaceutical firms collaborate with hospitals, universities, and research institutions around the country to provide clinical trials and treatments and to create new jobs.  Their products also boost workplace productivity by treating medical conditions, thereby reducing absenteeism and disability leave.

Properly tailored and limited regulation of biopharmaceutical products and medical devices helps to promote public safety, but FDA regulations as currently designed hinder and slow the innovation process and retard the diffusion of medical improvements.  Specifically, research indicates that current regulatory norms and the delays they engender unnecessarily bloat costs, discourage research and development, slow the pace of health improvements for millions of Americans, and harm the American economy.  These factors should be kept in mind by Congress and the Administration as they study how best to reform (and, where appropriate, eliminate) FDA regulation of drugs and medical devices.  (One particular reform that appears to be unequivocally beneficial and thus worthy of immediate consideration is the prohibition of any FDA restrictions on truthful speech concerning off-label drug uses—speech that benefits consumers and enjoys First Amendment protection.)  Reducing the burdens imposed on inventors by the FDA would allow more drugs to get to the market more quickly so that patients could pursue new and potentially lifesaving treatments.

While we all wait on pins and needles for the DC Circuit to issue its long-expected ruling on the FCC’s Open Internet Order, another federal appeals court has pushed back on Tom Wheeler’s FCC for its unremitting “just trust us” approach to federal rulemaking.

The case, round three of Prometheus, et al. v. FCC, involves the FCC’s long-standing rules restricting common ownership of local broadcast stations and their extension by Tom Wheeler’s FCC to the use of joint sales agreements (JSAs). (For more background see our previous post here). Once again the FCC lost (it’s now only 1 for 3 in this case…), as the Third Circuit Court of Appeals took the Commission to task for failing to establish that its broadcast ownership rules were still in the public interest, as required by law, before it decided to extend those rules.

While much of the opinion deals with the FCC’s unreasonable delay (of more than seven years) in completing two Quadrennial Reviews in relation to its diversity rules, the court also vacated the FCC’s expansion of its duopoly rule (or local television ownership rule) to ban joint sales agreements without first undertaking those reviews.

We (the International Center for Law and Economics, along with affiliated scholars of law, economics, and communications) filed an amicus brief arguing for precisely this result, noting that

the 2014 Order [] dramatically expands its scope by amending the FCC’s local ownership attribution rules to make the rule applicable to JSAs, which had never before been subject to it. The Commission thereby suddenly declares unlawful JSAs in scores of local markets, many of which have been operating for a decade or longer without any harm to competition. Even more remarkably, it does so despite the fact that both the DOJ and the FCC itself had previously reviewed many of these JSAs and concluded that they were not likely to lessen competition. In doing so, the FCC also fails to examine the empirical evidence accumulated over the nearly two decades some of these JSAs have been operating. That evidence shows that many of these JSAs have substantially reduced the costs of operating TV stations and improved the quality of their programming without causing any harm to competition, thereby serving the public interest.

The Third Circuit agreed that the FCC utterly failed to justify its continued foray into banning potentially pro-competitive arrangements, finding that

the Commission violated § 202(h) by expanding the reach of the ownership rules without first justifying their preexisting scope through a Quadrennial Review. In Prometheus I we made clear that § 202(h) requires that “no matter what the Commission decides to do to any particular rule—retain, repeal, or modify (whether to make more or less stringent)—it must do so in the public interest and support its decision with a reasoned analysis.” Prometheus I, 373 F.3d at 395. Attribution of television JSAs modifies the Commission’s ownership rules by making them more stringent. And, unless the Commission determines that the preexisting ownership rules are sound, it cannot logically demonstrate that an expansion is in the public interest. Put differently, we cannot decide whether the Commission’s rationale—the need to avoid circumvention of ownership rules—makes sense without knowing whether those rules are in the public interest. If they are not, then the public interest might not be served by closing loopholes to rules that should no longer exist.

Perhaps this decision will be a harbinger of good things to come. The FCC — and especially Tom Wheeler’s FCC — has a history of failing to justify its rules with anything approaching rigorous analysis. The Open Internet Order is a case in point. We will all be better off if courts begin to hold the Commission’s feet to the fire and throw out their rules when the FCC fails to do the work needed to justify them.

It appears that the White House’s zeal for progressive-era legal theory has … progressed (or regressed?) further. Late last week President Obama signed an Executive Order that nominally claims to direct executive agencies (and “strongly encourages” independent agencies) to adopt “pro-competitive” policies. It’s called Steps to Increase Competition and Better Inform Consumers and Workers to Support Continued Growth of the American Economy, and was produced alongside an issue brief from the Council of Economic Advisers titled Benefits of Competition and Indicators of Market Power.

TL;DR version: the Order and its brief appear aimed not so much at protecting consumers or competition as at providing justification for favored regulatory adventures.

In truth, it’s not exactly clear what problem the President is trying to solve. And there is language in both the Order and the brief that could be interpreted in a positive light, and, likewise, language that could be more of a shot across the bow of “unruly” corporate citizens who have not gotten in line with the President’s agenda. Most of the Order and the corresponding CEA brief read as a rote recital of basic antitrust principles: price fixing bad, collusion bad, competition good. That said, there were two items in the Order that particularly stood out.

The (Maybe) Good

Section 2 of the Order states that

Executive departments … with authorities that could be used to enhance competition (agencies) shall … use those authorities to promote competition, arm consumers and workers with the information they need to make informed choices, and eliminate regulations that restrict competition without corresponding benefits to the American public. (emphasis added)

Obviously this is music to the ears of anyone who has thought that agencies should be required to do a basic economic analysis before undertaking brave voyages of regulatory adventure. And this is what the Supreme Court was getting at in Michigan v. EPA when it examined the meaning of the phrase “appropriate” in connection with environmental regulations:

One would not say that it is even rational, never mind “appropriate,” to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits.

Thus, if this Order follows the direction of Michigan v. EPA, and it becomes the standard for agencies to conduct cost-benefit analyses before issuing regulation (and to review old regulations through such an analysis), then wonderful! Moreover, this mandate to agencies to reduce regulations that restrict competition could lead to an unexpected reformation of a variety of regulations – even outside of the agencies themselves. For instance, the FTC is laudable in its ongoing efforts both to correct anticompetitive state licensing laws and to resist state-protected incumbents, such as taxi-cab companies.

Still, I have trouble believing that the President — and this goes for any president, really, regardless of party — would truly intend for agencies under his control to actually cede regulatory ground when a little thing like economic reality points in a different direction than official policy. After all, there was ample information available that the Title II requirements on broadband providers would be both costly and result in reduced capital expenditures, and the White House nonetheless encouraged the FCC to go ahead with reclassification.

And this isn’t the first time that the President has directed agencies to perform retrospective review of regulation (see the Identifying and Reducing Regulatory Burdens Order of 2012). To date, however, there appears to be little evidence that the burdens of the regulatory state have lessened. Last year set a record for the page count of the Federal Register (80k+ pages), and the data suggest that the cost of the regulatory state is only increasing. Thus, despite the pleasant noises the Order makes with regard to imposing economic discipline on agencies – and despite the good example Canada has set for us in this regard – I am not optimistic of the actual result.

And the (maybe) good builds an important bridge to the (probably) bad of the Order. It is well and good to direct agencies to engage in economic calculation when they write and administer regulations, but such calculation must be in earnest, and must be directed by the learning that was hard earned over the course of the development of antitrust jurisprudence in the US. As Geoffrey Manne and Josh Wright have noted:

Without a serious methodological commitment to economic science, the incorporation of economics into antitrust is merely a façade, allowing regulators and judges to select whichever economic model fits their earlier beliefs or policy preferences rather than the model that best fits the real‐world data. Still, economic theory remains essential to antitrust law. Economic analysis constrains and harnesses antitrust law so that it protects consumers rather than competitors.

Unfortunately, the brief does not indicate that it is interested in more than a façade of economic rigor. For instance, it relies on the outmoded 50-firm revenue concentration numbers gathered by the Census Bureau to support the proposition that the industries themselves are highly concentrated and, therefore, anticompetitive. But it’s been fairly well understood since the 1970s that concentration says nothing directly about monopoly power and its exercise. In fact, concentration can often be seen as an indicator of superior efficiency that results in better outcomes for consumers (depending on the industry).
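
To see why concentration figures are such a blunt instrument, consider a minimal sketch (hypothetical market shares of my own construction, not the Census Bureau’s data): two markets can post identical concentration ratios while having radically different structures.

```python
# Hypothetical illustration (not Census data): why a concentration
# ratio alone says little about market structure, let alone about
# monopoly power or its exercise.

def concentration_ratio(shares, n=4):
    """N-firm concentration ratio: summed shares of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s ** 2 for s in shares)

# Two hypothetical markets with identical four-firm concentration (CR4 = 80)
market_a = [20, 20, 20, 20, 4, 4, 4, 4, 4]  # four symmetric leaders
market_b = [65, 5, 5, 5, 4, 4, 4, 4, 4]     # one dominant firm

for name, shares in (("A", market_a), ("B", market_b)):
    print(f"Market {name}: CR4 = {concentration_ratio(shares)}%, "
          f"HHI = {hhi(shares)}")

# Market A: CR4 = 80%, HHI = 1680  (a moderately concentrated structure)
# Market B: CR4 = 80%, HHI = 4380  (a single dominant firm)
# Identical concentration ratios, very different markets -- and even the
# HHI says nothing about entry conditions, efficiencies, or actual conduct.
```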

The (Probably) Bad

Apart from general concerns (such as having a host of federal agencies with no antitrust expertise now engaging in competition turf wars), there is one specific area that could have a dramatically bad result for long-term policy, and that moreover reflects either ignorance or willful blindness of antitrust jurisprudence. Specifically, the Order directs agencies to

identify specific actions that they can take in their areas of responsibility to build upon efforts to detect abuses such as price fixing, anticompetitive behavior in labor and other input markets, exclusionary conduct, and blocking access to critical resources that are needed for competitive entry. (emphasis added).

It then goes on to say that

agencies shall submit … an initial list of … any specific practices, such as blocking access to critical resources, that potentially restrict meaningful consumer or worker choice or unduly stifle new market entrants (emphasis added)

The generally uncontroversial references to price fixing and exclusionary conduct are bromides – after all, as the Order notes, we already have the FTC and DOJ very actively policing this sort of conduct. What’s novel here, however, is that the highlighted language above seems to amount to a mandate to executive agencies (and a strong suggestion to independent agencies) that they begin to seek out “essential facilities” within their regulated industries.

But “critical resources … needed for competitive entry” could mean nearly anything, depending on how you define competition and relevant markets. And asking non-antitrust agencies to integrate one of the more esoteric (and controversial) parts of antitrust law into their mission is going to be a recipe for disaster.

In fact, this may be one of the reasons why the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.

In short, the essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.” One important reason for the broad criticism is because

At bottom, a plaintiff … is saying that the defendant has a valuable facility that it would be difficult to reproduce … But … the fact that the defendant has a highly valued facility is a reason to reject sharing, not to require it, since forced sharing “may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” (quoting Trinko)

Further, it’s really hard to say when one business is so critical to a particular market that its own internal functions need to be exposed for competitors’ advantage. For instance, is Big Data – which the CEA brief specifically notes as a potential “critical resource” – an essential facility when one company serves so many consumers that it has effectively developed an entire market that it dominates? (In case you are wondering, it’s actually not.) When exactly does a firm so outcompete its rivals that access to its business infrastructure can be seen by regulators as “essential” to competition? And is this just a set-up for punishing success — which hardly promotes competition, innovation or consumer welfare?

And, let’s be honest here, when the CEA is considering Big Data as an essential facility they are at least partially focused on Google and its various search properties. Google is frequently the target for “essentialist” critics who argue, among other things, that Google’s prioritization of its own properties in its own search results violates antitrust rules. The story goes that Google search is so valuable that when Google publishes its own shopping results ahead of its various competitors, it is engaging in anticompetitive conduct. But this is a terribly myopic view of what the choices are for search services because, as Geoffrey Manne has so ably noted before, “competitors denied access to the top few search results at Google’s site are still able to advertise their existence and attract users through a wide range of other advertising outlets[.]”

Moreover, as more and more users migrate to specialized apps on their mobile devices for a variety of content, Google’s desktop search becomes just one choice among many for finding information. All of this leaves to one side, of course, the fact that for some categories, Google has incredibly stiff competition.

Thus it is that

to the extent that inclusion in Google search results is about “Stiglerian” search-cost reduction for websites (and it can hardly be anything else), the range of alternate facilities for this function is nearly limitless.

The troubling thing here is that, given the breezy analysis of the Order and the CEA brief, I don’t think the White House is really considering the long-term legal and economic implications of its command; the Order appears to be much more about political support for favored agency actions already under way.

Indeed, despite the length of the CEA brief and the variety of antitrust principles recited in the Order itself, an accompanying release points to what is really going on (at least in part). The White House, along with the FCC, seems to think that the embedded streams in a cable or satellite broadcast should be considered a form of essential facility that is an indispensable component of video consumers’ choice (which is laughable given the magnitude of choice in video consumption options that consumers enjoy today).

And, to the extent that courts might apply the (controversial) essential facilities doctrine, an “indispensable requirement … is the unavailability of access to the ‘essential facilities’[.]” This is clearly not the case with much of what the CEA brief points to as examples of ostensibly laudable pro-competitive regulation.

The doctrine wouldn’t apply, for instance, to the FCC’s Open Internet Order since edge providers have access to customers over networks, even where network providers want to zero-rate, employ usage-based billing or otherwise negotiate connection fees and prioritization. And it also doesn’t apply to the set-top box kerfuffle; while third parties aren’t able to access the video streams that make up a cable broadcast, the market for consuming those streams is a single part of the entire video ecosystem. What really matters there is access to viewers, and the ability to provide services to consumers and compete for their business.

Yet, according to the White House, “the set-top box is the mascot” for the administration’s competition Order, because, apparently, cable boxes represent “what happens when you don’t have the choice to go elsewhere.” (“Elsewhere” to the White House, I assume, cannot include Roku, Apple TV, Hulu, Netflix, and a myriad of other video options that consumers can currently choose among.)

The set-top box is, according to the White House, a prime example of the problem that

[a]cross our economy, too many consumers are dealing with inferior or overpriced products, too many workers aren’t getting the wage increases they deserve, too many entrepreneurs and small businesses are getting squeezed out unfairly by their bigger competitors, and overall we are not seeing the level of innovative growth we would like to see.

This is, of course, nonsense. Consumers enjoy an incredible array of low-cost, high-quality goods (including video options) – far more than at any point in history.  After all:

From cable to Netflix to Roku boxes to Apple TV to Amazon FireStick, we have more ways to find and watch TV than ever — and we can do so in our living rooms, on our phones and tablets, and on seat-back screens at 30,000 feet. Oddly enough, FCC Chairman Tom Wheeler … agrees: “American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.”

Thus, I suspect that the White House has its eye on a broader regulatory agenda.

For instance, the Department of Labor recently announced that it would be extending its reach in the financial services industry by changing the standard for when financial advice might give rise to a fiduciary relationship under ERISA. It seems obvious that the SEC or FINRA could have taken up the slack for any financial services regulatory issues – it’s certainly within their respective wheelhouses. But that’s not the direction the administration took, possibly because SEC and FINRA are independent agencies. Thus, the DOL – an agency with substantially less financial and consumer protection experience than either the SEC or FINRA — has expansive new authority.

And that’s where more of the language in the Order comes into focus. It directs agencies to “ensur[e] that consumers and workers have access to the information needed to make informed choices[.]” The text of the DOL rule develops for itself a basis in competition law as well:

The current proposal’s defined boundaries between fiduciary advice, education, and sales activity directed at large plans, may bring greater clarity to the IRA and plan services markets. Innovation in new advice business models, including technology-driven models, may be accelerated, and nudged away from conflicts and toward transparency, thereby promoting healthy competition in the fiduciary advice market.

Thus, it’s hard to see what the White House is doing in the Order, other than laying the groundwork for expansive authority of non-independent executive agencies under the thin guise of promoting competition. Perhaps the President believes that couching this expansion in free-market terms (i.e., that it’s “pro-competition”) will somehow help the initiatives go through with minimal friction. But there is nothing in the Order or the CEA brief to provide any confidence that competition will, in fact, be promoted. And in the end I have trouble seeing how this sort of regulatory adventurism does not run afoul of separation-of-powers issues, as well as assorted other legal challenges.

Finally, conjuring up a regulatory version of the essential facilities doctrine as a support for this expansion is simply a terrible idea — one that smacks much more of industrial policy than of sound regulatory reform or consumer protection.