[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Mark Jamison is the Gerald L. Gunter Memorial Professor and director of the Public Utility Research Center at the University of Florida’s Warrington College of Business. He’s also a visiting scholar at the American Enterprise Institute.]
Chairman Ajit Pai will be remembered as one of the most consequential Federal Communications Commission chairmen in history. His policy accomplishments are numerous, including the repeal of Title II regulation of the internet, expanded rural broadband development, increased spectrum for 5G, reduced waste in universal service funding, and better control of robocalls.
Less will be said about the important work he has done rebuilding the FCC’s independence. It is rare for a new FCC chairman to devote resources to building the institution. Most focus on their policy agendas, because policies and regulations are the legacies the media notices, and because time and resources are limited. Chairman Pai did what few have even attempted: he both built the organization and made significant regulatory reforms.
Independence is the ability of a regulatory institution to operate at arm’s length from the special interests of industry, politicians, and the like. The pressures to bias actions to benefit favored stakeholders can be tremendous; the FCC greatly influences who gets how much of the billions of dollars that are at stake in FCC decisions. But resisting those pressures is critical because investment and services suffer when a weak FCC is directed by political winds or industry pressures rather than law and hard analysis.
Chairman Pai inherited a politicized FCC. Research by Scott Wallsten showed that commission votes had been unusually partisan under the previous chairman (November 2013 through January 2017). From the beginning of Reed Hundt’s term as chairman until November 2013, only 4% of commission votes had divided along party lines. By contrast, 26% of votes divided along party lines from November 2013 until Chairman Pai took over. This division was also reflected in a sharp decline in unanimous votes under the previous administration. Only 47% of FCC votes on orders were unanimous, as opposed to an average of 60% from Hundt through the brief term of Mignon Clyburn.
Chairman Pai and his fellow commissioners worked to heal this divide. According to the FCC’s data, under Chairman Pai over 80% of items on the monthly meeting agenda had bipartisan support and over 70% were adopted without dissent. This was hard: Democrats in general were deeply opposed to President Donald Trump, and some members of Congress found a divided FCC convenient.
The political orientation of the FCC prior to Chairman Pai was made clear in the management of controversial issues. The agency’s work on net neutrality in 2015 pivoted strongly toward heavy regulation when President Barack Obama released his video supporting Title II regulation of the internet. And there is evidence that the net-neutrality decision was made in the White House, not at the FCC. Agency economists were cut out of internal discussions once the political decision had been made to side with the president, causing the FCC’s chief economist to quip that the decision was an economics-free zone.
On other issues, a vote on Lifeline was delayed several hours so that people on Capitol Hill could lobby a Democratic commissioner to align with fellow Democrats and against the Republican commissioners. And an initiative to regulate set-top boxes was buoyed, not by analyses by FCC staff, but by faulty data and analyses from Democratic senators.
Chairman Pai recognized the danger of politically driven decision-making and noted that it was enabled in part by the agency’s lack of a champion for economic analyses. To remedy this situation, Chairman Pai proposed forming an Office of Economics and Analytics (OEA). The commission adopted his proposal, but unfortunately it was with one of the rare party-line votes. Hopefully, Democratic commissioners have learned the value of the OEA.
The OEA has several responsibilities, but those most closely aligned with supporting the agency’s independence are that it: (a) provides economic analysis, including cost-benefit analysis, for commission actions; (b) develops policies and strategies on data resources and best practices for data use; and (c) conducts long-term research. The work of the OEA makes it hard for a politically driven chairman to pretend that his or her initiatives are somehow substantive.
Another institutional weakness at the FCC was a lack of transparency. Prior to Chairman Pai, the public was not allowed to view the text of commission decisions until after they were adopted. Even worse, sometimes the text that the commissioners saw when voting was not the text in the final decision. Wallsten described in his research a situation where the meaning of a vote actually changed from the time of the vote to the release of the text:
On February 9, 2011 the Federal Communications Commission (FCC) released a proposed rule that included, among many other provisions, capping the Universal Service Fund at $4.5 billion. The FCC voted to approve a final order on October 27, 2011. But when the order was finally released on November 18, 2011, the $4.5 billion ceiling had effectively become a floor, with the order requiring the agency to forever estimate demand at no less than $4.5 billion. Because payments from the fund had been decreasing steadily, this floor means that the FCC is now collecting hundreds of billions of dollars more in taxes than it is spending on the program. [footnotes omitted]
The lack of transparency led many to distrust the FCC and encouraged stakeholders with inside access to bypass the legitimate public process for lobbying the agency. This would have encouraged corruption had Chairman Pai not changed the system. He required that decision texts be released to the public at the same time they were released to commissioners. This allows the public to see what the commissioners are voting on, and it ensures that orders do not change after they are voted on.
The FCC demonstrated its independence under Chairman Pai. In the case of net neutrality, the three Republican commissioners withstood personal threats, mocking from congressional Democrats, and pressure from Big Tech to restore light-handed regulation. About a year later, Chairman Pai was strongly criticized by President Trump for rejecting the Sinclair-Tribune merger. And despite the president’s support of the merger, he apparently had sufficient respect for the FCC’s independence that the White House never contacted the FCC about the issue. In the case of Ligado Networks’ use of its radio spectrum license, the FCC stood up to intense pressure from the U.S. Department of Defense and from members of Congress who wanted to substitute their technical judgement for the FCC’s research on the impacts of Ligado’s proposal.
It is possible that a new FCC could undo this new independence. Commissioners could marginalize their economists, take their directions from partisans, and reintroduce the practice of hiding information from the public. But Chairman Pai foresaw this and carefully made his changes part of the institutional structure of the FCC, making any steps backward visible to all concerned.
Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development.
Municipal broadband, of course, can mean more than one thing: From “direct consumer” government-run systems, to “open access” where government builds the back-end, but leaves it up to private firms to bring the connections to consumers, to “middle mile” where the government network reaches only some parts of the community but allows private firms to connect to serve other consumers. The focus of this blog post is on the “direct consumer” model.
There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas it operates, and largely fails the cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move.
What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.
The importance of profit-and-loss to economic calculation
One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace.
This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for the incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Prices in interest rates help coordinate present consumption, investment, and saving. And, the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs.
The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.
For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection demanded by consumers depends on the price. All other things being equal, consumers demand faster connections with fewer latency issues. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even the relative importance of upload speeds versus download speeds may be highly asymmetrical when determined by consumers.
While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or of 100 Mbps upload speeds versus 3 Mbps, may not be worth the costs.
For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit.
All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows that ISPs were unsuccessful in cost-effectively serving consumers. While broadband companies may sustain losses for some period of time, they must ultimately turn a profit or exit the marketplace. Profit and loss both serve important functions.
Sallet misses the point when he states that the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and as a signal for greater competition and innovation.
Municipal broadband as islands of chaos
Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.
The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.
This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes.
Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.
But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers thanks to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient and proceeding to the most. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.
Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate.
Conclusion
There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Miranda Perry Fleischer (Professor of Law and Co-Director of Tax Programs at the University of San Diego School of Law) and Matt Zwolinski (Professor of Philosophy, University of San Diego; founder and director, USD Center for Ethics, Economics, and Public Policy; founder and contributor, Bleeding Heart Libertarians Blog).]
This week, Americans began receiving cold, hard cash from the government. Meant to cushion the economic fallout of COVID-19, the CARES Act provides households with relief payments of up to $1200 per adult and $500 per child. As we have written elsewhere, direct cash transfers are the simplest, least paternalistic, and most efficient way to protect Americans’ economic health – pandemic or not. The idea of simply giving people money has deep historical and wide ideological roots, culminating in Andrew Yang’s popularization of a universal basic income (“UBI”) during his now-suspended presidential campaign. The CARES Act relief provisions embody some of the potential benefits of a UBI, but nevertheless fail in key ways to deliver on its true promise.
Provide Cash, No-Strings-Attached
Most promisingly, the relief payments are no-strings-attached. Recipients can use them as they – not the government – think best, be it for rent, food, or a laptop for a child to learn remotely. This freedom is a welcome departure from most current aid programs, which are often in-kind or restricted transfers. Kansas, for example, prohibits welfare recipients from using benefits at movie theaters and swimming pools. SNAP recipients cannot purchase “hot food” such as a ready-to-eat roasted chicken, and California has a 17-page pamphlet identifying which foods users of Women, Infants and Children (“WIC”) benefits can buy (for example, white eggs but not brown).
These restrictions arise from a distrust of beneficiaries. Yet numerous studies show that recipients of cash transfers do not waste benefits on alcohol, drugs or gambling. Instead, beneficiaries in developing countries purchase livestock, metal roofs, or healthier food. In wealthier countries, cash transfers are associated with improvements in infant health, better nutrition, higher test scores, more schooling, and lower rates of arrest for young adults – all of which suggest beneficiaries do not waste cash.
Avoid Asset Tests
A second positive of the relief payments is that they eschew asset tests, unlike many welfare programs. For example, a family can lose hundreds of dollars of SNAP benefits if its countable assets exceed $2,250. Such limits act as an implicit wealth tax and discourage lower-income individuals from saving. Indeed, some recipients report engaging in transactions like buying furniture on lay-away (which does not count) to avoid the asset limits. Lower-income individuals, for whom a car repair bill or traffic ticket can lead to financial ruin, should be encouraged to save for a rainy day – not penalized for doing so.
Don’t Worry So Much about the Labor Market
A third pro is that the direct relief payments are not tied to a showing of desert. They do not require one to work, be looking for work, or show that one is either unable to work or engaged in a substitute such as child care or school. Again, this contrasts with most current welfare programs. SNAP requires able-bodied childless adults to work or participate in training or education 80 hours a month. Supplemental Security Income requires non-elderly recipients to prove that they are blind or disabled. Nor do the relief payments require recipients to pass a drug test, or prove they have no criminal record.
As with spending restrictions, these requirements display distrust of beneficiaries. The fear is that “money for nothing” will encourage low-income individuals to leave their jobs en masse. But this fear, too, is largely overblown. Although past experiments with unconditional transfers show that total work hours drop, the bulk of this drop comes from teenagers staying in school longer, new mothers delaying entrance into the workforce, and primary earners reducing their hours from, say, 60 to 50 hours a week. We could also imagine UBI recipients spending time volunteering, engaging in the arts, or taking care of friends and relatives. None of these are necessarily bad things.
Don’t Limit Aid to the “Deserving”
On these three counts, the CARES Act embraces the promise of a UBI. But it departs from key aspects of a well-designed, true UBI. Most importantly, the size of the relief payments – one-time transfers of $1200 per adult – pales in comparison to the Act’s enhanced unemployment benefits of $600 per week. This mismatch underscores how deeply ingrained our country’s obsession with helping only the “deserving” poor is, and how narrowly “desert” is defined. The Act’s most generous aid is limited to individuals with pre-existing connections to the formal labor market who leave under very specific conditions. Someone who cannot work because they are caring for a family member sick with COVID-19 qualifies, but not an adult child who left a job months ago to care for an aging parent with Alzheimer’s. A parent who cannot work because her child’s school was cancelled due to the pandemic qualifies, but not a parent who hasn’t worked the past couple of years due to the lack of affordable child care. And because unemployment benefits not only turn on being previously employed but also rise with one’s past wages, the result is a safety net that helps the slightly poor much more than the very poorest among us.
Don’t Impose Bureaucratic Hurdles
The botched roll-out of the enhanced unemployment benefits illustrates another downside to targeting aid only to the “deserving”: it is far more complicated than giving aid to all who need it. Guidance for self-employed workers (newly eligible for such benefits) is still forthcoming. Individuals who had more than one employer before the crisis struggle to enter multiple jobs into the system, even though their benefits increase with their past wages. Even college graduates have trouble completing the clunky forms; a friend who teaches yoga had to choose between “aqua fitness instructor” and “physical education” when listing her job.
These frustrations are just another example of the government’s ineptitude at determining who is and is not work capable – even in good times. Often, the very people that can navigate the system to convince the government they are unable to work are actually the most work-capable. Those least capable of work, unable to navigate the system, receive nothing. And as millions of Americans spend countless hours on the phone and navigating crashing websites, they are learning what has been painfully obvious to many lower-income individuals for years – the government often puts insurmountable barriers in the way of even the “deserving poor.” These barriers – numerous office visits, lengthy forms, drug tests – are sometimes so time consuming that beneficiaries must choose between obtaining benefits to which they are legally entitled and applying for jobs or working extra hours. Lesson one from the CARES Act is that universal payments, paid to all, avoid these pitfalls.
Don’t Means Test Up Front
The CARES Act contains three other flaws that a well-designed UBI would also fix. First, the structure of the cash transfers highlights the drawbacks of up-front means testing. In an attempt to limit aid to Americans in financial distress, the $1200 relief payments begin to phase out at five cents on the dollar once income exceeds a certain threshold: $75,000 for childless, single individuals and $150,000 for married couples. The catch is that for most Americans, their 2019 or 2018 incomes will determine whether their relief payments phase out – and therefore how much aid they receive now, in 2020. In a world where 22 million Americans have filed for unemployment in the past month, looking to one- or two-year-old data to determine need is meaningless. Many Americans whose pre-pandemic incomes exceeded the threshold are now struggling to make mortgage payments and put food on the table, but will receive little or no direct cash aid under the CARES Act until April 2021.
This absurdity magnifies a problem inherent in ex ante means tests. Often, one’s past financial status does not tell us much about an individual’s current needs. This is particularly true when incomes fluctuate from period to period, as is the case with many lower-income workers. Imagine a fast food worker and SNAP beneficiary whose schedule changes month to month, if not week to week. If she is lucky enough to work a lot in November, she may see her December SNAP benefits reduced. But what if her boss gives her fewer shifts in December? Both her paycheck and her SNAP benefits will be lower in December, leaving her struggling.
The solution is to send cash to all Americans and recapture the transfer through the income tax system. Mathematically, an ex post tax is exactly the same as an ex ante phase-out. Consider the CARES Act. A childless single individual with an income of $85,000 is $10,000 over the threshold, reducing her benefit by $500 and netting her $700. Giving her a check for $1200 and taxing her an additional 5% on income above $75,000 also nets her $700. As a practical matter, however, an ex post tax is more accurate because hindsight is 20/20. Lesson two from the CARES Act is that universal payments offset by taxes are superior to ex ante means testing.
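To make the equivalence concrete, here is a minimal sketch (in Python, using only the illustrative figures above – a $1200 payment, a $75,000 threshold, and a five-cent-per-dollar rate – rather than the statute’s full rules) showing that an up-front phase-out and a later 5% surtax net out identically at any income level:

```python
# Sketch: an ex ante phase-out and an ex post surtax yield the same net benefit.
# Figures mirror the CARES Act example in the text; filing status, children, and
# other statutory details are ignored, so this is illustrative only.

PAYMENT = 1200        # relief payment per adult
THRESHOLD = 75_000    # phase-out threshold for a childless single filer
RATE = 0.05           # five cents on the dollar

def ex_ante_net(income):
    """Benefit after an up-front reduction of 5 cents per dollar above the threshold."""
    reduction = max(0, income - THRESHOLD) * RATE
    return max(0, PAYMENT - reduction)

def ex_post_net(income):
    """Full payment now, recaptured later via a 5% surtax on income above the threshold."""
    surtax = max(0, income - THRESHOLD) * RATE
    return PAYMENT - min(surtax, PAYMENT)   # clawback capped at the payment itself

for income in (60_000, 85_000, 99_000, 120_000):
    assert ex_ante_net(income) == ex_post_net(income)
    print(f"income ${income:,}: net benefit ${ex_ante_net(income):,.0f}")
# At $85,000 both approaches net $700, matching the example above.
```

The practical difference, as noted above, is timing and accuracy: the ex post version uses actual 2020 income rather than a stale 2018 or 2019 return.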
Provide Regular Payments
Second, the CARES Act provides one lump-sum payment, with struggling Americans wondering whether Congress will act again. This is a missed opportunity: studies show that families receiving SNAP benefits face challenges planning for even a month at a time. Lesson three is that guaranteed monthly or bi-weekly payments – as a true UBI would provide – would help households plan and provide some peace of mind amidst this uncertainty.
Provide Equal Payments to Children and Adults
Finally, the CARES Act provides a smaller benefit to children than to adults. This is nonsensical. A single parent with two children faces greater hardship than a married couple with one child, as she has the same number of mouths to feed with fewer earners. Further, social science evidence suggests that augmenting family income has positive long-run consequences for children. Lesson four from the CARES Act is that the empirical case for a UBI is strongest for families with children.
It’s Better to Be Overly, not Underly, Generous
The Act’s direct cash payments are a step in the right direction. But they demonstrate that not all cash assistance plans are created equal. Uniform and periodic payments to all – regardless of age or one’s relationship to the workforce – are the best way to protect Americans’ economic health, pandemic or not. This is not the time to be stingy or moralistic in our assistance. Better to err on the side of being overly generous now, especially when we can correct that error later through the tax system. Errors that result in withholding aid from those who need it, alas, might not be so easy to correct.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.]
The COVID-19 pandemic and the shutdown of many public-facing businesses have resulted in sudden shifts in demand for common goods. The demand for hand sanitizer has drastically increased for hospitals, businesses, and individuals. At the same time, demand for distilled spirits has fallen substantially, as the closure of bars, restaurants, and tasting rooms has cut craft distillers off from their primary buyers. Since ethanol is a key ingredient in both spirits and sanitizer, this situation presents an obvious opportunity for distillers to shift their production from the former to the latter. Hundreds of distilleries have made this transition, but it has not been without obstacles. Some of these reflect a real scarcity of needed supplies, but other constraints have been externally imposed by government regulations and the tax code.
Producing sanitizer
The World Health Organization provides guidelines and recipes for locally producing hand sanitizer. The relevant formulation for distilleries calls for only four ingredients: high-proof ethanol (96%), hydrogen peroxide (3%), glycerol (98%), and sterile distilled or boiled water. Distilleries are well-positioned to produce or obtain ethanol and water. Glycerol is used in only small amounts and does not currently appear to be a substantial constraint on production. Hydrogen peroxide is harder to come by, but distilleries are adapting and cooperating to ensure supply. Skip Tognetti, owner of Letterpress Distilling in Seattle, Washington, reports that one local distiller obtained a drum of 34% hydrogen peroxide, which stretches a long way when diluted to a concentration of 3%. Local distillers have been sharing this drum so that they can all produce sanitizer.
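For a rough sense of how far that shared drum stretches, the standard dilution relation C1·V1 = C2·V2 applies. A minimal sketch (in Python; the 55-gallon drum size is an assumption for illustration, not a figure from the post):

```python
# Sketch: diluting concentrated hydrogen peroxide down to the 3% stock strength
# used in the WHO recipe. The drum volume is an assumed figure for illustration.

STOCK_CONC = 0.34      # 34% hydrogen peroxide, as obtained by the local distiller
TARGET_CONC = 0.03     # 3% stock strength called for in the WHO formulation
DRUM_GALLONS = 55      # assumed standard drum size

# C1 * V1 = C2 * V2  =>  V2 = V1 * (C1 / C2)
diluted_gallons = DRUM_GALLONS * STOCK_CONC / TARGET_CONC
print(f"{DRUM_GALLONS} gal at 34% yields about {diluted_gallons:.0f} gal at 3%")
# Roughly 620 gallons of 3% solution - and the finished handrub uses only a small
# fraction of that per batch, which is why one drum can supply several distillers.
```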
Another constraint is finding containers in which to put the finished product. Not all containers are suitable for holding high-proof alcoholic solutions, and supplies of those recommended for sanitizer are scarce. The fact that many of these bottles are produced in China has reportedly also limited the supply. Distillers are therefore having to get creative; Tognetti reports looking into shampoo bottles, and in Chicago distillers have re-purposed glass beer growlers. Through informal channels, some distillers have allowed consumers to bring their own containers to fill with sanitizer for personal use. Food and Drug Administration labeling requirements have also prevented the use of travel-size bottles, since the bottles are too small to display the necessary information.
The raw materials for producing ethanol are also coming from some unexpected sources. Breweries are typically unable to produce alcohol at high enough proof for sanitizer, but multiple breweries in Chicago are donating beer that distilleries can bring up to the required purity. Beer giant Anheuser-Busch is also producing sanitizer with the ethanol removed from its alcohol-free beers.
In many cases, the sanitizer is donated or sold at low-cost to hospitals and other essential services, or to local consumers. Online donations have helped to fund some of these efforts, and at least one food and beverage testing lab has stepped up to offer free testing to breweries and distilleries producing sanitizer to ensure compliance with WHO guidelines. Distillers report that the regulatory landscape has been somewhat confusing in recent weeks, and posts in a Facebook group have provided advice for how to get through the FDA’s registration process. In general, distillers going through the process report that agencies have been responsive. Tom Burkleaux of New Deal Distilling in Portland, Oregon says he “had to do some mighty paperwork,” but that the FDA and the Oregon Board of Pharmacy were both quick to process applications, with responses coming in just a few hours or less.
In general, the redirection of craft distilleries to producing hand sanitizer is an example of private businesses responding to market signals and the evident challenges of the health crisis to produce much-needed goods; in some cases, sanitizer represents one of their only sources of revenue during the shutdown, providing a lifeline for small businesses. The Distilled Spirits Council currently lists nearly 600 distilleries making sanitizer in the United States.
There is one significant obstacle that has hindered the production of sanitizer, however: an FDA requirement that distilleries obtain extra ingredients to denature their alcohol.
Denaturing sanitizer
According to the WHO, the four ingredients mentioned above are all that are needed to make sanitizer. In fact, the WHO specifically notes that in most circumstances it is inadvisable to add anything else: “it is not recommended to add any bittering agents to reduce the risk of ingestion of the handrubs” except in cases where there is a high probability of accidental ingestion. Further, “[…] there is no published information on the compatibility and deterrent potential of such chemicals when used in alcohol-based handrubs to discourage their abuse. It is important to note that such additives may make the products toxic and add to production costs.”
Denaturing agents are used to render alcohol either too bitter or too toxic to consume, deterring abuse by adults or accidental ingestion by children. In ordinary circumstances, there are valid reasons to denature sanitizer. In the current pandemic, however, the denaturing requirement is a significant bottleneck in production.
The federal Tax and Trade Bureau is the primary agency regulating alcohol production in the United States. The TTB took action early to encourage distilleries to produce sanitizer, officially releasing guidance on March 18 instructing them that they are free to commence production without prior authorization or formula approval, so long as they are making sanitizer in accordance with WHO guidelines. On March 23, the FDA issued its own emergency authorization of hand sanitizer production; unlike the WHO, FDA guidance does require the use of denaturants. As a result, on March 26 the TTB issued new guidance to be consistent with the FDA.
Under current rules, only sanitizer made with denatured alcohol is exempt from the federal excise tax on beverage alcohol. Federal excise taxes begin at $2.70 per gallon for low-volume distilleries and reach up to $13.50 per gallon, significantly increasing the cost of producing hand sanitizer; state excise taxes can raise these costs even higher.
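To put those rates in rough perspective, here is a minimal sketch (in Python) of the tax exposure on a single batch of undenatured sanitizer, using the per-gallon figures quoted above; actual liability is computed on proof gallons and depends on a distillery’s volume tier, so treat the numbers as illustrative only:

```python
# Sketch: rough federal excise tax exposure on a batch of undenatured sanitizer.
# Uses the per-gallon rates quoted above; real liability depends on proof gallons
# and the distillery's volume tier, so this is a simplified illustration.

LOW_VOLUME_RATE = 2.70   # $/gallon, reduced rate for low-volume distilleries
STANDARD_RATE = 13.50    # $/gallon, top rate

batch_gallons = 200      # the batch size mentioned by the distiller quoted below

print(f"Low-volume rate: ${LOW_VOLUME_RATE * batch_gallons:,.2f}")  # $540.00
print(f"Standard rate:   ${STANDARD_RATE * batch_gallons:,.2f}")    # $2,700.00
```

Even at the reduced rate, the tax adds hundreds of dollars to a single batch – a cost that denatured (and therefore exempt) sanitizer does not bear.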
As one distiller put it:

To be clear, if I didn’t have to track down denaturing agents (there are several, but isopropyl alcohol is the most common), I could turn out 200 gallons of finished hand sanitizer TODAY.
(As an additional concern, the Distilled Spirits Council notes that the extremely bitter or toxic nature of denaturing agents may impose additional costs on distillers given the need to thoroughly cleanse them from their equipment.)
Congress attempted to address these concerns in the CARES Act, the coronavirus relief package. Section 2308 explicitly waives the federal excise tax on distilled spirits used for the production of sanitizer; however, it leaves the formula specification in the hands of the FDA. Unless the agency revises its guidance, production in the US will be constrained by the requirement to add denaturing agents to the plentiful supply of ethanol, or distilleries will risk being targeted with enforcement actions if they produce perfectly usable sanitizer without denaturing their alcohol.
Local distilleries provide agile production capacity
In recent days, larger spirits producers including Pernod-Ricard, Diageo, and Bacardi have announced plans to produce sanitizer. Given their resources and economies of scale, they may end up taking over a significant part of the market. Yet small, local distilleries have displayed the agility necessary to rapidly shift production. It’s worth noting that many of these distilleries did not exist until fairly recently. According to the American Craft Spirits Association, there were fewer than 100 craft distilleries operating in the United States in 2005. By 2018, there were more than 1,800. This growth is the result of changing consumer interests, but also the liberalization of state and local laws to permit distilleries and tasting rooms. That many of these distilleries have the capacity to produce sanitizer in a time of emergency is a welcome, if unintended, consequence of this liberalization.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Mark Jamison (Director and Gunter Professor, Public Utility Research Center, University of Florida, and Visiting Scholar with the American Enterprise Institute).]
The economic impacts of the coronavirus pandemic, and of the government responses to it, are significant and could be staggering, especially for small businesses. Goldman Sachs estimates a potential 24% drop in US GDP for the second quarter of 2020 and a 4% decline for the year. Its small business survey found that a little over half of small businesses might last for less than three months in this economic downturn. Small business employs nearly 60 million people in the US. How many will be out of work this year is anyone’s guess, but the number will be large.
What should small businesses do? First, focus on staying in business, because their customers and employees need them to be healthy when the economy begins to recover. That will certainly mean slowing business activity, decreasing payroll to manage losses, and managing liquidity.
Second, look for opportunities in the present crisis. Consumers are slowing their spending, but they will spend for things they still need and need now. And there will be new demand for things they didn’t need much before, like more transportation of food, support for health needs, and crisis management. Which business sectors will recover first? Those whose downturns represented delayed demand, such as postponed repairs and business travel, rather than evaporated demand, such as luxury items.
Third, they can watch for and take advantage of government support programs. Many programs simply provide low-cost loans, which do not solve the small-business problem of customers not buying: Borrowing money to meet payroll for idle workers simply delays business closure and makes bankruptcy more likely. But some grants and tax breaks are under discussion (see below).
Fourth, they can renegotiate loans and contracts. One of the mistakes lenders made in the past was holding stressed borrowers’ feet to the fire, which only led to more, and more costly, loan defaults. At least some lenders have learned. So lenders and some suppliers might be willing to receive some payments rather than none.
What should government do? Unfortunately, Washington seems to think that so-called stimulus spending is the cure for any economic downturn. This isn’t true. I’ll explain why below, but let me first get to what is more productive.
The major problem is that customers are unable to buy and businesses are unable to produce because of the responses to the coronavirus. Sometimes transactions are impossible, but there are times where buying and selling is simply made more costly by the pandemic and the government responses. So government support for the economy should address these problems directly.
For buyers, government officials should recognize that purchasing has become harder and more costly. Policies should therefore improve consumers’ ability to buy during this time. Sales tax holidays, especially on healthcare, food, and transportation, would be helpful.
Waivers of postal fees would make e-commerce cheaper. And temporary support for fixed costs, such as mortgages, would free money for other things. Tax breaks for the gig economy would lower service costs and provide new employment opportunities. And tax credits for durables like home improvements would lower costs of social distancing.
But the better opportunities for government impact are on the business side because small business affects both the supply of services and the incomes of consumers.
For small business policy, my American Enterprise Institute colleagues Glenn Hubbard and Michael Strain have done the most thoughtful work that I have seen. They note that the problems for small businesses are that they do not have enough business activity to meet payroll and other bills. This means that “(t)he goal should be to replace a large portion of the revenue (not just the payroll expenses) those businesses would have generated in the absence of being shut down due to the coronavirus.”
They suggest policies to replace 80 percent of the small business revenue loss. How? By providing grants in the form of government-backed commercial loans that are forgiven if the business continues and maintains payroll, subject to workers being allowed to quit if they find better opportunities.
What else might work? Tax breaks that lower business costs. These can be breaks in payroll taxes, marginal income tax rates, equipment purchases, permitting, etc., including tax holidays. Allowing businesses to carry back current losses would trigger tax refunds that improve their finances.
One of the least useful ideas for small businesses is interest-free loans. These might be great for large businesses, which are largely managing their financial positions. But such loans fail to address the basic small-business problem of keeping the doors open when customers aren’t buying.
Finally, why doesn’t traditional stimulus work, even in other times of economic downturn? Traditional spending-based stimulus assumes that the economic problem is that people want to build things but not buy them. That’s not a very good assumption, especially today, when the problems are the higher cost of buying – or perhaps the impossibility of buying with social distancing – and the higher costs of doing business. Keeping businesses in business is the key to supporting the economy.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Dirk Auer (Senior Researcher, Liege Competition & Innovation Institute; Senior Fellow, ICLE).]
Across the globe, millions of people are rapidly coming to terms with the harsh realities of life under lockdown. As governments impose ever-greater social distancing measures, many of the daily comforts we took for granted are no longer available to us.
And yet, we can all take solace in the knowledge that our current predicament would have been far less tolerable if the COVID-19 outbreak had hit us twenty years ago. Among others, we have Big Tech firms to thank for this silver lining.
Contrary to the claims of critics, such as Senator Josh Hawley, Big Tech has produced game-changing innovations that dramatically improve our ability to fight COVID-19.
The previous post in this series showed that innovations produced by Big Tech provide us with critical information, allow us to maintain some level of social interactions (despite living under lockdown), and have enabled companies, universities and schools to continue functioning (albeit at a severely reduced pace).
But apart from information, social interactions, and online working (and learning), what has Big Tech ever done for us?
One of the most underappreciated ways in which technology (mostly pioneered by Big Tech firms) is helping the world deal with COVID-19 has been a rapid shift towards contactless economic transactions. Not only are consumers turning towards digital goods to fill their spare time, but physical goods (most notably food) are increasingly being exchanged without any direct contact.
These ongoing changes would be impossible without the innovations and infrastructure that have emerged from tech and telecommunications companies over the last couple of decades.
Of course, the overall picture is still bleak. The shift to contactless transactions has only slightly softened the tremendous blow suffered by the retail and restaurant industries – some predictions suggest their overall revenue could fall by at least 50% in the second quarter of 2020. Nevertheless, as explained below, this situation would likely be significantly worse without the many innovations produced by Big Tech companies. For that we should be thankful.
1. Food and other goods
For a start, the COVID-19 outbreak (and government measures to combat it) has caused many brick & mortar stores and restaurants to shut down. These closures would have been far harder to implement before the advent of online retail and food delivery platforms.
At the time of writing, e-commerce websites already appear to have witnessed a 20-30% increase in sales (other sources report a 52% increase compared to the same time last year). This increase will likely continue in the coming months.
The Amazon Retail platform has been at the forefront of this online shift.
Having witnessed a surge in online shopping, Amazon announced that it would be hiring 100,000 distribution workers to cope with the increased demand. Amazon’s staff have also been asked to work overtime in order to meet increased demand (in exchange, Amazon has doubled their pay for overtime hours).
To attract these new hires and ensure that existing ones continue working, Amazon simultaneously announced that it would be increasing wages in virus-hit countries (from $15 to $17 in the US).
Amazon also stopped accepting “non-essential” goods in its warehouses, in order to prioritize the sale of household essentials and medical goods that are in high demand.
Finally, in Italy, Amazon decided not to stop its operations, despite some employees testing positive for COVID-19. Controversial as this move may be, Amazon’s private interests are aligned with those of society – maintaining the supply of essential goods is now more important than ever.
And it is not just Amazon that is seeking to fill the breach left temporarily by brick & mortar retail. Other retailers are also stepping up efforts to distribute their goods online.
The apps of traditional retail chains have witnessed record daily downloads (thus relying on the smartphone platforms pioneered by Google and Apple).
Walmart has become the go-to choice for online food purchases.
Given the drastically lower activity within their brick & mortar stores, Walmart and Target, among others, have announced they would make their parking lots available for drive-thru testing.
The shift to online shopping mimics what occurred in China, during its own COVID-19 lockdown.
According to an article published in HBR, e-commerce penetration reached 36.6% of retail sales in China (compared to 29.7% in 2019). The same article explains how Alibaba’s technology is enabling traditional retailers to better manage their supply chains, ultimately helping them to sell their goods online.
A Nielsen study found that 67% of retailers would expand their online channels.
Spurred by compassion and/or a desire to boost their brand abroad, Alibaba and its founder, Jack Ma, have made significant efforts to provide critical medical supplies (notably test kits and surgical masks) to COVID-hit countries such as the US and Belgium.
And it is not just retail that is adapting to the outbreak. Many restaurants are trying to stay afloat by shifting from in-house dining to deliveries. These attempts have been made possible by the emergence of food delivery platforms, such as UberEats and Deliveroo.
These platforms have taken several steps to facilitate food deliveries during the outbreak.
UberEats announced that it would be waiving delivery fees for independent restaurants.
Both UberEats and Deliveroo have put in place systems for deliveries to take place without direct physical contact. While not entirely risk-free, meal delivery can provide welcome relief to people experiencing stressful lockdown conditions.
Similarly, the shares of Blue Apron – an online meal-kit delivery service – have surged more than 600% since the start of the outbreak.
In short, COVID-19 has caused a drastic shift towards contactless retail and food delivery services. It is an open question how much of this shift would have been possible without the pioneering business model innovations brought about by Amazon and its online retail platform, as well as modern food delivery platforms, such as UberEats and Deliveroo. At the very least, it seems unlikely that it would have happened as fast.
The entertainment industry is another area where increasing digitization has made lockdowns more bearable. The reason is obvious: locked-down consumers still require some form of amusement. With physical supply chains under tremendous strain, and social gatherings no longer an option, digital media has thus become the default choice for many.
Data published by Verizon shows a sharp increase (in the week running from March 9 to March 16) in the consumption of digital entertainment, especially gaming.
This echoes other sources, which also report that the use of traditional streaming platforms has surged in areas hit by COVID-19.
Netflix subscriptions are said to be spiking in locked-down communities. During the first week of March, Netflix installations increased by 77% in Italy and 33% in Spain, compared to the February average. Netflix app downloads increased by 33% in Hong Kong and South Korea. The Amazon Prime app saw a similar increase.
Disney Plus has also been highly popular. According to one source, half of US homes with children under the age of 10 purchased a Disney Plus subscription. This trend is expected to continue during the COVID-19 outbreak. Disney even released Frozen II three months ahead of schedule in order to boost new subscriptions.
Hollywood studios have started releasing some of their lower-profile titles directly on streaming services.
Traffic has also increased significantly on popular gaming platforms.
According to the CEO of Verizon, gaming hours have gone up 75% since the start of the COVID-19 lockdowns in early March.
Fortnite is also experiencing increased usage. In Italy, for example, game time is said to have increased by 70% since the beginning of the outbreak.
Activision’s Call of Duty: Warzone achieved a record 15 million downloads in the first three days following its release. Its release likely led to the biggest peak in network usage in the UK since the start of the COVID-19 outbreak.
These are just a tiny sample of the many ways in which digital entertainment is filling the void left by social gatherings. It is thus central to the lives of people under lockdown.
2. Cashless payments
But all of the services listed above rely on cashless payments – be it to limit the risk of contagion or because these transactions take place remotely. Fintech innovations have thus turned out to be one of the foundations that make social distancing policies viable.
This is particularly evident in the food industry.
Food delivery platforms, like UberEats and Deliveroo, already relied on mobile payments.
Costa Coffee (a UK equivalent to Starbucks) went cashless in an attempt to limit the spread of COVID-19.
Domino’s Pizza, among other franchises, announced that it would move to contactless deliveries.
President Donald Trump is said to have discussed plans to keep drive-thru restaurants open during the outbreak. This would almost certainly imply exclusively digital payments.
And although doubts remain concerning the extent to which the SARS-CoV-2 virus may, or may not, be transmitted via banknotes and coins, many other businesses have preemptively ceased to accept cash payments.
As Jodie Kelley, CEO of the Electronic Transactions Association, put it in a CNBC interview:
Contactless payments have come up as a new option for consumers who are much more conscious of what they touch.
This increased demand for cashless payments has been a blessing for Fintech firms.
Though it is too early to gauge the magnitude of this shift, early signs – notably from China – suggest that mobile payments have become more common during the outbreak.
In China, Alipay announced that it expected to radically expand its services to new sectors – restaurants, cinema bookings, real estate purchases – in an attempt to compete with WeChat.
PayPal has also witnessed an uptick in transactions, though this growth might ultimately be weighed down by declining economic activity.
In the past, Facebook had revealed plans to offer mobile payments across its platforms – Facebook, WhatsApp, Instagram, and Libra. Those plans may not have been politically viable at the time. The COVID-19 outbreak could conceivably change this.
In short, the COVID-19 outbreak has increased our reliance on digital payments, as these can both take place remotely and, potentially, limit contamination via banknotes. None of this would have been possible twenty years ago when industry pioneers, such as PayPal, were in their infancy.
3. High speed internet access
Similarly, it goes without saying that none of the above would be possible without the tremendous investments that have been made in broadband infrastructure, most notably by internet service providers. Though these companies have often faced strong criticism from the public, they provide the backbone upon which outbreak-stricken economies can function.
By causing so many activities to move online, the COVID-19 outbreak has put broadband networks to the test. So far, broadband infrastructure around the world has been up to the task. This is partly because the spike in usage has occurred during daytime hours (when networks’ capacity is less strained), but also because ISPs traditionally rely on a number of tools to limit peak-time usage.
Data from OpenVault confirms that the biggest increases in usage have occurred during daytime hours.
Anecdotal data also suggests that, so far, fixed internet providers have not significantly struggled to handle this increased traffic (the same goes for content delivery networks). Not only were these networks already designed to withstand high peaks in demand, but ISPs such as Verizon have increased their capacity to avoid potential issues.
For instance, internet speed tests performed using Ookla suggest that average download speeds have only marginally decreased, if at all, in locked-down regions compared to previous levels.
However, the same data suggests that mobile networks have faced slightly larger decreases in performance, though these do not appear to be severe. For instance, contrary to contemporaneous reports, a mobile network outage that occurred in the UK is unlikely to have been caused by a COVID-related surge.
The robustness exhibited by broadband networks is notably due to long-running efforts by ISPs (spurred by competition) to improve download speeds and latency. As one article put it:
For now, cable operators’ and telco providers’ networks are seemingly withstanding the increased demands, which is largely due to the upgrades that they’ve done over the past 10 or so years using technologies such as DOCSIS 3.1 or PON.
Pushed in part by Google Fiber’s launch back in 2012, the large cable operators and telcos, such as AT&T, Verizon, Comcast and Charter Communications, have spent years upgrading their networks to 1-Gig speeds. Prior to those upgrades, cable operators in particular struggled with faster upload speeds, and the slowdown of broadband services during peak usage times, such as after school and in the evenings, as neighborhood nodes became overwhelmed.
This is not without policy ramifications.
For a start, these developments might vindicate antitrust enforcers that allowed mergers that led to higher investments, sometimes at the expense of slight reductions in price competition. This is notably the case for so-called 4 to 3 mergers in the wireless telecommunications industry. As an in-depth literature review by ICLE scholars concludes:
Studies of investment also found that markets with three facilities-based operators had significantly higher levels of investment by individual firms.
A second ramification concerns net neutrality. In Europe, authorities asked streaming platforms such as Netflix and YouTube to reduce their video quality in order to relieve pressure on networks. This may seem like a trivial problem, but it was totally avoidable. As a result of net neutrality regulation, European authorities and content providers have been forced into an awkward, and likely unfounded, position that unnecessarily penalizes those consumers and ISPs who do not face congestion issues (conversely, it lets failing ISPs off the hook and disincentivizes further investment on their part). This is all the more unfortunate given that, as argued above, streaming services are essential to locked-down consumers.
Critics may retort that small quality decreases hardly have any impact on consumers. But, if this is indeed the case, then content providers were using up unnecessary amounts of bandwidth before the COVID-19 outbreak (something that is less likely to occur without net neutrality obligations). And if not, then European consumers have indeed been deprived of something they valued. The shoe is thus on the other foot.
These normative considerations aside, the big point is that we can all be thankful to live in an era of high-speed internet.
4. Concluding remarks
Big Tech is rapidly emerging as one of the heroes of the COVID-19 crisis. Companies that were once on the receiving end of daily reproaches – by the press, enforcers, and scholars alike – are gaining renewed appreciation from the public. Times have changed since the early days of these companies – when consumers marvelled at the endless possibilities that their technologies offered. Today we are coming to realize how essential tech companies have become to our daily lives, and how they make society more resilient in the face of fat-tailed events, like pandemics.
The move to a contactless, digital economy is a critical part of what makes contemporary societies better equipped to deal with COVID-19. As this post has argued, online delivery, digital entertainment, contactless payments and high-speed internet all play a critical role.
To think that we receive some of these services for free…
Last year, Erik Brynjolfsson, Avinash Collis, and Felix Eggers published a paper in PNAS showing that consumers were willing to pay significant sums for online goods they currently receive free of charge. One can only imagine how much larger those sums would be if that same experiment were repeated today.
Even Big Tech's critics are willing to recognize the huge debt we owe to these companies. As Steven Levy wrote, in an article titled "Has the Coronavirus Killed the Techlash?":
Who knew the techlash was susceptible to a virus?
The pandemic does not make any of the complaints about the tech giants less valid. They are still drivers of surveillance capitalism who duck their fair share of taxes and abuse their power in the marketplace. We in the press must still cover them aggressively and skeptically. And we still need a reckoning that protects the privacy of citizens, levels the competitive playing field, and holds these giants to account. But the momentum for that reckoning doesn’t seem sustainable at a moment when, to prop up our diminished lives, we are desperately dependent on what they’ve built. And glad that they built it.
While it is still too early to draw policy lessons from the outbreak, one thing seems clear: the COVID-19 pandemic provides yet further evidence that tech policymakers should be extremely careful not to kill the goose that lays the golden eggs by promoting regulations that may thwart innovation (or the opposite).
John Maynard Keynes wrote in his famous General Theory that "[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist."
This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.
Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.
Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.”
Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s.
Yet, overall, it is clear that Appelbaum believes the "outsized" influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists' hour, and the financial crisis of the late 2000s as its close. His thesis is that (his interpretation of) economists' notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.
In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.
First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.
The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.
In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.
Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that they can lose money on retail for decades (even though it has been profitable for some time), on the theory that someday down the line they can raise prices after they have run all retail competition out.
Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum's belief is that innovation is spurred when government forces dominant players "to make room" for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in "killer acquisitions" — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged "killer acquisitions":
“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”
This analysis suggests that a case-by-case review is necessary where antitrust plaintiffs can show evidence that harm to consumers is likely to result from a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.
Third, Appelbaum's few concrete examples of harm to consumers resulting from "lax antitrust enforcement" in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. Yet neither is a clear example of harm to consumers, nor can either be used to show that Europe's antitrust framework is superior to that of the United States.
In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.
While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.
Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger…
One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.
In other words, neither part of Appelbaum's proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.
Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While prices are lower on average in Europe for broadband, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo's 2014 study, U.S. vs. European Broadband Deployment: What Do the Data Say?, found:
U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.
Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC's 2018 International Broadband Data Report shows the United States moving from 23rd to 14th among the 29 countries studied (most of them European) once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th, if data usage is included (Model 3) and to 7th if content quality (i.e., websites available in the local language) is taken into consideration (Model 4).
| Country | Model 1 Price | Rank | Model 2 Price | Rank | Model 3 Price | Rank | Model 4 Price | Rank |
|---|---|---|---|---|---|---|---|---|
| Australia | $78.30 | 28 | $82.81 | 27 | $102.63 | 26 | $84.45 | 23 |
| Austria | $48.04 | 17 | $60.59 | 15 | $73.17 | 11 | $74.02 | 17 |
| Belgium | $46.82 | 16 | $66.62 | 21 | $75.29 | 13 | $81.09 | 22 |
| Canada | $69.66 | 27 | $74.99 | 25 | $92.73 | 24 | $76.57 | 19 |
| Chile | $33.42 | 8 | $73.60 | 23 | $83.81 | 20 | $88.97 | 25 |
| Czech Republic | $26.83 | 3 | $49.18 | 6 | $69.91 | 9 | $60.49 | 6 |
| Denmark | $43.46 | 14 | $52.27 | 8 | $69.37 | 8 | $63.85 | 8 |
| Estonia | $30.65 | 6 | $56.91 | 12 | $81.68 | 19 | $69.06 | 12 |
| Finland | $35.00 | 9 | $37.95 | 1 | $57.49 | 2 | $51.61 | 1 |
| France | $30.12 | 5 | $44.04 | 4 | $61.96 | 4 | $54.25 | 3 |
| Germany | $36.00 | 12 | $53.62 | 10 | $75.09 | 12 | $66.06 | 11 |
| Greece | $35.38 | 10 | $64.51 | 19 | $80.72 | 17 | $78.66 | 21 |
| Iceland | $65.78 | 25 | $73.96 | 24 | $94.85 | 25 | $90.39 | 26 |
| Ireland | $56.79 | 22 | $62.37 | 16 | $76.46 | 14 | $64.83 | 9 |
| Italy | $29.62 | 4 | $48.00 | 5 | $68.80 | 7 | $59.00 | 5 |
| Japan | $40.12 | 13 | $53.58 | 9 | $81.47 | 18 | $72.12 | 15 |
| Latvia | $20.29 | 1 | $42.78 | 3 | $63.05 | 5 | $52.20 | 2 |
| Luxembourg | $56.32 | 21 | $54.32 | 11 | $76.83 | 15 | $72.51 | 16 |
| Mexico | $35.58 | 11 | $91.29 | 29 | $120.40 | 29 | $109.64 | 29 |
| Netherlands | $44.39 | 15 | $63.89 | 18 | $89.51 | 21 | $77.88 | 20 |
| New Zealand | $59.51 | 24 | $81.42 | 26 | $90.55 | 22 | $76.25 | 18 |
| Norway | $88.41 | 29 | $71.77 | 22 | $103.98 | 27 | $96.95 | 27 |
| Portugal | $30.82 | 7 | $58.27 | 13 | $72.83 | 10 | $71.15 | 14 |
| South Korea | $25.45 | 2 | $42.07 | 2 | $52.01 | 1 | $56.28 | 4 |
| Spain | $54.95 | 20 | $87.69 | 28 | $115.51 | 28 | $106.53 | 28 |
| Sweden | $52.48 | 19 | $52.16 | 7 | $61.08 | 3 | $70.41 | 13 |
| Switzerland | $66.88 | 26 | $65.01 | 20 | $91.15 | 23 | $84.46 | 24 |
| United Kingdom | $50.77 | 18 | $63.75 | 17 | $79.88 | 16 | $65.44 | 10 |
| United States | $58.00 | 23 | $59.84 | 14 | $64.75 | 6 | $62.94 | 7 |
| Average | $46.55 | | $61.70 | | $80.24 | | $73.73 | |
Model 1: Unadjusted for demographics and content quality
Model 2: Adjusted for demographics but not content quality
Model 3: Adjusted for demographics and data usage
Model 4: Adjusted for demographics and content quality
Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:
The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing.
In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE.
Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition.
In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.
Conclusion
At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway. For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors.
So applying economic analysis to Appelbaum's claims may itself be an illustration of caring too much about economic models instead of learning "the lessons of history." But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There's no escaping mental models to understand the world. It is just a question of whether we are willing to change our minds if a better way of understanding the world presents itself. As Keynes is purported to have said, "When the facts change, I change my mind. What do you do, sir?"
For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists' Hour. The question that remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.
Wall Street Journal commentator Greg Ip reviews Thomas Philippon's forthcoming book, The Great Reversal: How America Gave Up on Free Markets. Ip describes a "growing mountain" of research on industry concentration in the U.S. and reports that Philippon concludes competition has declined over time, harming U.S. consumers.
In one example, Philippon points to air travel. He notes that concentration in the U.S. has increased rapidly—spiking since the Great Recession—while concentration in the EU has increased modestly. At the same time, Ip reports “U.S. airlines are now far more profitable than their European counterparts.” (Although it’s debatable whether a five percentage point difference in net profit margin is “far more profitable”).
On first impression, the figures fit nicely with the populist antitrust narrative: As concentration in the U.S. grew, so did profit margins. Closer inspection raises some questions, however.
For example, the U.S. airline industry had a negative net profit margin in each of the years prior to the spike in concentration. While negative profits may be good for consumers, it would be a stretch to argue that long-run losses are good for competition as a whole. At some point one or more of the money losing firms is going to pull the ripcord. Which raises the issue of causation.
Just looking at the figures from the WSJ article, one could argue that rather than concentration driving profit margins, instead profit margins are driving concentration. Indeed, textbook IO economics would indicate that in the face of losses, firms will exit until economic profit equals zero. Paraphrasing Alfred Marshall, “Which blade of the scissors is doing the cutting?”
While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to Philippon’s conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.
Regressing the U.S. airfare price index against Philippon's concentration information in the figure above (and controlling for general inflation) finds that if U.S. concentration in 2015 had been the same as in 1995, U.S. airfares would be about 2.8% lower. That a 1,250-point increase in HHI would be associated with a 2.8% increase in prices indicates that the increased concentration in U.S. airlines has led to no significant increase in consumer prices.
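For readers who want to see the mechanics of that back-of-the-envelope exercise, here is a minimal sketch in Python. The HHI, CPI, and fare-index series below are invented placeholders (neither Philippon's data nor any official price index); only the structure of the regression and the counterfactual follows the description above.

```python
# Hypothetical illustration only: the HHI, CPI, and fare-index series are made up.
# The point is to show the mechanics of the exercise described in the text:
# regress the (log) airfare price index on HHI, controlling for general inflation,
# then ask how fares would differ if concentration had stayed at its 1995 level.
import numpy as np
import statsmodels.api as sm

years = np.arange(1995, 2016)
# Assumed concentration path: flat, then a ~1,250-point spike after the Great Recession.
hhi = np.where(years < 2008, 1000.0, 1000.0 + 1250.0 * (years - 2007) / 8.0)
cpi = 100 * 1.022 ** (years - 1995)       # assumed ~2.2% annual general inflation
rng = np.random.default_rng(0)
fares = 100 * 1.018 ** (years - 1995) * np.exp(rng.normal(0, 0.01, len(years)))

X = sm.add_constant(np.column_stack([hhi, np.log(cpi)]))
fit = sm.OLS(np.log(fares), X).fit()

beta_hhi = fit.params[1]
# Counterfactual: 2015 fares if HHI had remained at its 1995 level.
change = np.exp(beta_hhi * (hhi[0] - hhi[-1])) - 1
print(f"Implied change in 2015 fares at 1995 concentration: {change:.1%}")
```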
Also, if consumers are truly worse off, one would expect to see a drop off or slow down in the use of air travel. An eyeballing of passenger data does not fit the populist narrative. Instead, we see airlines are carrying more passengers and consumers are paying lower prices on average.
While it’s true that low-cost airlines have shaken up air travel in the EU, the differences are not solely explained by differences in market concentration. For example, U.S. regulations prohibit foreign airlines from operating domestic flights while EU carriers compete against operators from other parts of Europe. While the WSJ’s figures tell an interesting story of concentration, prices, and profits, they do not provide a compelling case of anticompetitive conduct.
Writing in the New York Times, journalist E. Tammy Kim recently called for Seattle and other pricey, high-tech hubs to impose a special tax on Microsoft and other large employers of high-paid workers. Efficiency demands such a tax, she says, because those companies are imposing a negative externality: By driving up demand for housing, they are causing rents and home prices to rise, which adversely affects city residents.
Arguing that her proposal is "akin to a pollution tax," Ms. Kim writes:
A half-century ago, it seemed inconceivable that factories, smelters or power plants should have to account for the toxins they released into the air. But we have since accepted the idea that businesses should have to pay the public for the negative externalities they cause.
It is true that negative externalities—costs imposed on people who are "external" to the process creating those costs (as when a factory belches rancid smoke on its neighbors)—are often taxed. One justification for such a tax is fairness: It seems inequitable that one party would impose costs on another; justice may demand that the victimizer pay. The justification cited by the economist who first proposed such taxes, though, was something different. In his 1920 opus, The Economics of Welfare, British economist A.C. Pigou proposed taxing behavior involving negative externalities in order to achieve efficiency—an increase in overall social welfare.
With respect to the proposed tax on Microsoft and other high-tech employers, the fairness argument seems a stretch, and the efficiency argument outright fails. Let's consider each.
To achieve fairness by forcing a victimizer to pay for imposing costs on a victim, one must determine who is the victimizer. Ms. Kim's view is that Microsoft and its high-paid employees are victimizing (imposing costs on) incumbent renters and lower-paid homebuyers. But is that so clear?
Microsoft's desire to employ high-skilled workers, and those employees' desire to live near their work, conflicts with incumbent renters' desire for low rent and lower-paid homebuyers' desire for cheaper home prices. If Microsoft got its way, incumbent renters and lower-paid homebuyers would be worse off.
But incumbent renters' and lower-paid homebuyers' insistence on low rents and home prices conflicts with the desires of Microsoft, the high-skilled workers it would like to hire, and local homeowners. If incumbent renters and lower-paid homebuyers got their way and prevented Microsoft from employing high-wage workers, Microsoft, its potential employees, and local homeowners would be worse off. Who is the victim here?
As Nobel laureate Ronald Coase famously observed, in most cases involving negative externalities, there is a reciprocal harm: Each party is a victim of the other party’s demands and a victimizer with respect to its own. When both parties are victimizing each other, it’s hard to “do justice” by taxing “the” victimizer.
A desire to achieve efficiency provides a sounder basis for many so-called Pigouvian taxes. With respect to Ms. Kim’s proposed tax, however, the efficiency justification fails. To see why that is so, first consider how it is that Pigouvian taxes may enhance social welfare.
When a business engages in some productive activity, it uses resources (labor, materials, etc.) to produce some sort of valuable output (e.g., a good or service). In determining what level of productive activity to engage in (e.g., how many hours to run the factory, etc.), it compares its cost of engaging in one more unit of activity to the added benefit (revenue) it will receive from doing so. If its so-called "marginal cost" from the additional activity is less than or equal to the "marginal benefit" it will receive, it will engage in the activity; otherwise, it won't.
When the business is bearing all the costs and benefits of its actions, this outcome is efficient. The cost of the inputs used in production is determined by the value they could generate in alternative uses. (For example, if a flidget producer could create $4 of value from an ounce of tin, a widget-maker would have to bid at least $4 to win that tin from the flidget-maker.) If a business finds that continued production generates additional revenue (reflective of consumers' subjective valuation of the business's additional product) in excess of its added cost (reflective of the value its inputs could create if deployed toward their next-best use), then making more moves productive resources to their highest and best uses, enhancing social welfare. This outcome is "allocatively efficient," meaning that productive resources have been allocated in a manner that wrings the greatest possible value from them.
Allocative efficiency may not result, though, if the producer is able to foist some of its costs onto others. Suppose that it costs a producer $4.50 to make an additional widget that he could sell for $5.00. He’d make the widget. But what if producing the widget created pollution that imposed $1 of cost on the producer’s neighbors? In that case, it could be inefficient to produce the widget; the total marginal cost of doing so, $5.50, might well exceed the marginal benefit produced, which could be as low as $5.00. Negative externalities, then, may result in an allocative inefficiency—i.e., a use of resources that produces less total value than some alternative use.
Pigou's idea was to use taxes to prevent such inefficiencies. If the government were to charge the producer a tax equal to the cost his activity imposed on others ($1 in the above example), then he would capture all the marginal benefit and bear all the marginal cost of his activity. He would thus be motivated to continue his activity only to the point at which its total marginal benefit equaled its total marginal cost. The point of a Pigouvian tax, then, is to achieve allocative efficiency—i.e., to channel productive resources toward their highest and best ends.
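To make the arithmetic concrete, here is a minimal sketch in Python of the producer's decision rule with and without a Pigouvian tax, using the widget numbers from the example above ($5.00 price, $4.50 private marginal cost, $1.00 external cost).

```python
# Illustrative numbers from the widget example in the text: the producer's private
# marginal cost is $4.50, the widget sells for $5.00, and production imposes a $1.00
# pollution cost on neighbors. A tax equal to that external cost makes the producer
# internalize it, so he produces only when the full social cost is covered.
PRICE = 5.00            # marginal benefit (revenue from one more widget)
PRIVATE_COST = 4.50     # producer's own marginal cost
EXTERNAL_COST = 1.00    # pollution cost borne by neighbors

def produces(tax: float) -> bool:
    """The producer makes the widget if price covers his marginal cost plus the tax."""
    return PRICE >= PRIVATE_COST + tax

def socially_efficient() -> bool:
    """Production is efficient only if price covers the full social marginal cost."""
    return PRICE >= PRIVATE_COST + EXTERNAL_COST

print("No tax:        produces =", produces(0.0), "| efficient =", socially_efficient())
print("Pigouvian tax: produces =", produces(EXTERNAL_COST), "| efficient =", socially_efficient())
```

With no tax, the producer makes the widget even though doing so is socially wasteful; with a tax equal to the external cost, his private decision lines up with the efficient one.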
When it comes to the negative externality Ms. Kim has identified—an increase in housing prices occasioned by high-tech companies' hiring of skilled workers—the efficiency case for a Pigouvian tax crumbles. That is because the external cost at issue here is a "pecuniary" externality, a special sort of externality that does not generate inefficiency.
A pecuniary externality is one where the adverse third-party effect consists of an increase in market prices. If that's the case, the allocative inefficiency that may justify Pigouvian taxes does not exist. There's no inefficiency from the mere fact that buyers pay more. Their loss is perfectly offset by a gain to sellers, and—here's the crucial part—the higher prices channel productive resources toward, not away from, their highest and best ends. High rent levels, for example, signal to real estate developers that more resources should be devoted to creating living spaces within the city. That's allocatively efficient.
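A back-of-the-envelope illustration of that distinction, using made-up dollar figures: a pecuniary externality is a transfer that nets out of total surplus, while a technological externality such as pollution is a real resource cost that does not.

```python
# Hypothetical figures, purely to illustrate the distinction drawn in the text.
# Pecuniary externality: tech hiring bids up rents by $200/month. Renters lose $200,
# landlords gain $200 -- a transfer, so total surplus is unchanged (and the higher
# rent signals developers to build more housing).
renters_loss = -200
landlords_gain = +200
pecuniary_net = renters_loss + landlords_gain

# Technological externality: a factory's smoke imposes a $200 cleanup cost on
# neighbors that no one else recaptures -- a real resource loss.
neighbors_loss = -200
technological_net = neighbors_loss

print("Pecuniary externality, net change in total surplus:    ", pecuniary_net)      # 0
print("Technological externality, net change in total surplus:", technological_net)  # -200
```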
Now, it may well be the case that government policies thwart developers from responding to those salutary price signals. The cities that Ms. Kim says should impose a tax on high-tech employers—Seattle, San Francisco, Austin, New York, and Boulder—have some of the nation’s most restrictive real estate development rules. But that’s a government failure, not a market failure.
In the end, Ms. Kim's pollution tax analogy fails. The efficiency case for a Pigouvian tax to remedy negative externalities does not apply when, as here, the externality at issue is pecuniary.
For more on pecuniary versus “technological” (non-pecuniary) externalities and appropriate responses thereto, check out Chapter 4 of my recent book, How to Regulate: A Guide for Policymakers.
Over the past few weeks, Truth on the Market has had several posts related to harm reduction policies, with a focus on tobacco, e-cigarettes, and other vapor products:
Harm reduction policies are used to manage a wide range of behaviors including recreational drug use and sexual activity. Needle-exchange programs reduce the spread of infectious diseases among users of heroin and other injected drugs. Opioid replacement therapy substitutes illegal opioids, such as heroin, with a longer acting but less euphoric opioid. Safer sex education and condom distribution in schools are designed to reduce teenage pregnancy and reduce the spread of sexually transmitted infections. None of these harm reduction policies stop the risky behavior, nor do the policies eliminate the potential for harm. Nevertheless, the policies intend to reduce the expected harm.
Carrie Wade, Director of Harm Reduction Policy and Senior Fellow at the R Street Institute, draws a parallel between opiate harm reduction strategies and potential policies related to tobacco harm reduction. She notes that with successful one-year quit rates hovering around 10 percent, harm reduction strategies offer ways to transition more smokers off the most dangerous nicotine delivery device: the combustible cigarette.
Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Use of non-combustible nicotine delivery systems, such as e-cigarettes and smokeless tobacco, is generally considered to be significantly less harmful than smoking cigarettes. UK government agency Public Health England has concluded that e-cigarettes are around 95 percent less harmful than combustible cigarettes.
In the New England Journal of Medicine, Fairchild et al. (2018) identify a continuum of potential policies regarding the regulation of vapor products, such as e-cigarettes, shown in the figure below. They note that the most restrictive policies would effectively eliminate e-cigarettes as a viable alternative to smoking, while the most permissive may promote e-cigarette usage and potentially encourage young people—who would not do so otherwise—to take up e-cigarettes. In between these extremes are policies that may discourage young people from initiating use of e-cigarettes, while encouraging current smokers to switch to less harmful vapor products.
International Center for Law & Economics chief economist Eric Fruits notes in his blog post that more than 20 countries have introduced taxation on e-cigarettes and other vapor products. In the United States, several states and local jurisdictions have enacted e-cigarette taxes. His post is based on a recently released ICLE white paper entitled Vapor products, harm reduction, and taxation: Principles, evidence and a research agenda.
Under a harm reduction principle, Fruits argues that e-cigarettes and other vapor products should face no taxes or low taxes relative to conventional cigarettes, to guide consumers toward a safer alternative to smoking.
In contrast to harm reduction principles, the precautionary principle as well as principles of tax equity point toward the taxation of vapor products at rates similar to conventional cigarettes.
On the one hand, some policymakers claim that the objective of taxing nicotine products is to reduce nicotine consumption. On the other hand, Dan Mitchell, co-founder of the Center for Freedom and Prosperity, points out that some politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.
Often missed in the policy discussion is the effect of fiscal policies on innovation and the development and commercialization of harm-reducing products. Also, often missed are the consequences for current consumers of nicotine products, including smokers seeking to quit using harmful conventional cigarettes.
Policy decisions regarding taxation of vapor products should take into account both long-term fiscal effects and broader economic and welfare effects. These effects might (or might not) suggest very different tax policies to those that have been enacted or are under consideration. These considerations, however, are frustrated by unreliable and wildly divergent empirical estimates of consumer demand in the face of changing prices and/or rising taxes.
Along the lines of uncertain—if not surprising—impacts, Fritz Laux, professor of economics at Northeastern State University, provides an explanation of why smoke-free air laws have not been found to adversely affect revenues or employment in the restaurant and hospitality industries.
He argues that social norms regarding smoking in restaurants have changed to the point that many smokers themselves support bans on smoking in restaurants. In this way, he hypothesizes, smoke-free air laws do not impose a significant constraint on consumer behavior or business activity. We might likewise infer, by extension, that policies which do not prohibit vaping in public spaces (leaving such decisions to the discretion of business owners and managers) could encourage switching by people who otherwise would have to exit buildings in order to vape or smoke—without adversely affecting businesses.
Principles of harm reduction recognize that every policy proposal has uncertain outcomes as well as potential spillovers and unforeseen consequences. With such high risks and costs associated with cigarette and other combustible use, taxes and regulations must be developed in an environment of uncertainty and with an eye toward a net reduction in harm, rather than an unattainable goal of zero harm or in an overt pursuit of tax revenues.
In an ideal world, the discussion and debate about how (or if) to tax e-cigarettes, heat-not-burn, and other tobacco harm-reduction products would be guided by science. Policy makers would confer with experts, analyze evidence, and craft prudent and sensible laws and regulations.
In the real world, however, politicians are guided by other factors.
There are two things to understand, both of which are based on my conversations with policy staff in Washington and elsewhere.
First, this is a battle over tax revenue. Politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.
This is very much akin to the concern that electric cars and fuel-efficient cars will lead to a loss of money from excise taxes on gasoline.
In the case of fuel taxes, politicians are anxiously looking at other sources of revenue, such as miles-driven levies. Their main goal is to maintain – or preferably increase – the amount of money that is diverted to the redistributive state so that politicians can reward various interest groups.
In the case of tobacco, a reduction in the number of smokers (or the tax-driven propensity of smokers to seek out black-market cigarettes) is leading politicians to concoct new schemes for taxing e-cigarettes and related non-combustible products.
Second, this is a quasi-ideological fight. Not about capitalism versus socialism, or big government versus small government. It’s basically a fight over paternalism, or a battle over goals.
For all intents and purposes, the question is whether lawmakers should seek to simultaneously discourage both tobacco use and vaping because both carry some risk (and perhaps because both are considered vices of the lower classes), or whether they should welcome vaping since it leads to harm reduction as smokers shift to a dramatically safer way of consuming nicotine.
In statistics, researchers must always weigh the dangers of two types of mistakes: Type I errors (also known as "false positives") and Type II errors (also known as "false negatives").
How does this relate to smoking, vaping, and taxes?
Simply stated, both sides of the fight are focused on a key goal and secondary issues are pushed aside. In other words, tradeoffs are being ignored.
The advocates of high taxes on e-cigarettes and other non-combustible products are fixated on the possibility that vaping will entice some people into the market. Maybe vaping will even act as a gateway to smoking. So, they want high taxes on vaping, akin to high taxes on tobacco, even though the net result is that this leads many smokers to stick with cigarettes instead of making a switch to less harmful products.
On the other side of the debate are those focused on overall public health. They see emerging non-combustible products as very effective ways of promoting harm reduction. Is it possible that e-cigarettes may be tempting to some people who otherwise would never try tobacco? Yes, that’s possible, but it’s easily offset by the very large benefits that accrue as smokers become vapers.
For all intents and purposes, the fight over the taxation of vaping is similar to other ideological fights.
The old joke in Washington is that a conservative is someone who will jail 99 innocent people in order to put one crook in prison and a liberal is someone who will free 99 guilty people to prevent one innocent person from being convicted (or, if you prefer, a conservative will deny 99 poor people to catch one welfare fraudster and a liberal will line the pockets of 99 fraudsters to make sure one genuinely poor person gets money).
The vaping fight hasn’t quite reached this stage, but the battle lines are very familiar. At some point in the future, observers may joke that one side is willing to accept more smoking if one teenager forgoes vaping while the other side is willing to have lots of vapers if it means one less smoker.
Having explained the real drivers of this debate, I’ll close by injecting my two cents and explaining why the paternalists are wrong. But rather than focus on libertarian-type arguments about personal liberty, I’ll rely on three points, all of which are based on conventional cost-benefit analysis and the sensible approach to excise taxation.
First, tax policy should focus on incentivizing a switch rather than punishing those who choose a less harmful product. The goal should be harm reduction rather than revenue maximization.
Second, low tax burdens also translate into lower long-run spending burdens because a shift to vaping means a reduction in overall healthcare costs related to smoking cigarettes.
Third, it makes no sense to impose punitive “sin taxes” on behaviors that are much less, well, sinful. There’s a big difference in the health and fiscal impact of cigarettes compared to the alternatives.
One final point is that this issue has a reverse-class-warfare component. Anti-smoking activists generally have succeeded in stigmatizing cigarette consumption and most smokers are now disproportionately from the lower-income community. For better (harm reduction) or worse (elitism), low-income smokers are generally treated with disdain for their lifestyle choices.
It is not an explicit policy, but that disdain now seems to extend to any form of nicotine consumption, even though the health effects of vaping are vastly lower.
On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay to a preliminary ruling in their ongoing dispute over the terms of Qualcomm's licensing agreements–indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND ("fair, reasonable, and non-discriminatory") commitments in the context of standards-setting bodies and whether these obligations extend to component-level licensing in the absence of an express agreement to do so.
At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.
The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018–about which more below. But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC's major claims against Qualcomm were as follows:
It has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentives to innovate, and raises the prices to be paid by end consumers for cellphones and tablets.
Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license to its chipset-maker rivals; and its exclusive deals with Apple.
The above practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
Given that Qualcomm has made a commitment to standard setting bodies to license these patents on FRAND terms, such behaviour qualifies as a breach of FRAND.
The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:
[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that less than supports a vague standalone action under a Section 5 FTC claim.
Qualcomm filed a motion to dismiss on April 3, 2017. This was denied by the U.S. District Court for the Northern District of California. The court found that the FTC has adequately alleged that Qualcomm’s conduct violates § 1 and § 2 of the Sherman Act and that it had entered into exclusive dealing arrangements with Apple. Thus, the court asserted, the FTC has adequately stated a claim under § 5 of the FTCA.
It is important to note that the core of the FTC's arguments regarding Qualcomm's abuse of its dominant position rests on how Qualcomm's "no license, no chips" policy breaches its FRAND obligations. However, the FTC falls short of proving that the royalties Qualcomm charges OEMs exceed FRAND rates, actually amounting to a breach, or that they qualify as what the FTC defines as a "tax" under the price-squeeze theory it puts forth.
(The Court did not go into whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, this would have added more clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though the provision's scope remains quite amorphous.)
On August 30, the FTC filed a partial summary judgment motion in relation to claims on the applicability of local California contract laws. This would leave antitrust issues to be decided in the subsequent hearing, which is set for January next year.
In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S. based standards development organisations (SDOs):
The Telecommunications Industry Association (TIA) and
The Alliance for Telecommunications Industry Solutions (ATIS).
These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.
The FTC asserts that this obligation should be interpreted to extend to Qualcomm's rival modem chip manufacturers and sellers. It therefore requests that the Court grant summary judgment, since there are no disputed facts regarding this obligation. It submits that this should "streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm's commitments on the requirement to license to competitors, to ETSI, a third SDO." A review of the FTC's heavily redacted filing and Qualcomm's subsequent response indicates that questions of fact and law remain regarding Qualcomm's licensing commitments and their scope. Thus, contrary to the FTC's assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.
Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.
The IP licensing policies of the two SDOs provide for licensing of relevant patents to all applicants who implement these standards on FRAND terms. However, the key issues are whether components such as modem chips can be said to implement standards and whether component-level licensing falls within this ambit. Yet the resolution of these key issues is unclear.
Qualcomm explains that commitments to ATIS and TIA do not require licenses to be made available for modem chips because modem chips do not implement or practice cellular standards and that standards do not define the operation of modem chips.
In contrast, the complaint by FTC raises the question of whether FRAND commitments extend to licensing at all levels. Different components needed for a device come together to facilitate the adoption and implementation of a standard. However, it does not logically follow that each individual component of the device separately practices or implements that standard even though it contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.
These distinctions are significant from the point of interpreting the scope of the FRAND promise, which is commonly understood to extend to licensing of technologies incorporated in a standard to potential users of the standard. Understanding the meaning of a “user” becomes critical here and Qualcomm’s submission draws attention to this.
An important factor in the determination of a "user" of a particular standard is the extent to which the standard is practiced or implemented therein. Some standards development organisations (SDOs) have addressed this in their policies by clarifying that FRAND obligations extend to those "wholly compliant" or "fully conforming" to the specific standards. Clause 6.1 of the ETSI IPR Policy clarifies that a patent holder's obligation to make licenses available is limited to "methods" and "equipments." It defines an equipment as "a system or device fully conforming to a standard," and methods as "any method or operation fully conforming to a standard."
It is noteworthy that the American National Standards Institute’s (ANSI) Executive Standards Council Appeals Panel in a decision has said that there is no agreement on the definition of the phrase “wholly compliant implementation.”
Device level licensing is the prevailing industry wide practice by companies like Ericsson, InterDigital, Nokia and others. In November 2017, the European Commission issued guidelines on licensing of SEPs and took a balanced approach on this issue by not prescribing component level licensing in its guidelines.
The former director general of ETSI, Karl Rosenbrock, adopts a contrary view, explaining ETSI’s policy, “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”
Dr. Bertram Huber, a legal expert who personally participated in the drafting of the IPR policy of ETSI, wrote a response to Rosenbrock, in which he explains that ETSI’s IPR policies required licensing obligations for systems “fully conforming” to the standard:
[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

Huber highlights how, in adopting its IPR Policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice of manufacturers of complete end-devices concluding licenses to the standard essential patents practiced in those end-devices.
Both ATIS and TIA are organizational partners of a collaboration called 3rd Generation Partnership Project along with ETSI and four other SDOs who work on development of cellular technologies. TIA and ATIS are both accredited by ANSI. Therefore, these SDOs are likely to impact one another with the policies each one adopts. In the absence of definitive guidance on interpretation of the IPR policy and contractual terms within the institutional mechanism of ATIS and TIA, at the very least, clarity is needed on the ambit of these policies with respect to component level licensing.
The non-discrimination obligation, which, per the FTC, mandates Qualcomm to license to its competitors who manufacture and sell chips, would be limited by the scope of the IPR policy and contractual agreements that bind Qualcomm, and depends upon the specific SDO's policy. As discussed, the policies of ATIS and TIA are unclear on this.
In conclusion, the FTC's filing does not obviate the need to hear extrinsic evidence on what Qualcomm's commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA on whether they include component-level licensing, or whether modem chips in their entirety can be said to practice the standard, it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.