
The Economist takes on “sin taxes” in a recent article, “‘Sin’ taxes—eg, on tobacco—are less efficient than they look.” The article has several lessons for policy makers eyeing taxes on e-cigarettes and other vapor products.

Historically, taxes had the key purpose of raising revenues. The “best” taxes would be on goods with few substitutes (i.e., inelastic demand) and on goods deemed to be luxuries. In The Wealth of Nations, Adam Smith notes:

Sugar, rum, and tobacco are commodities which are nowhere necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.

The Economist notes that in 1764 a fiscal crisis driven by wars in North America led Britain’s parliament to begin enforcing tariffs on sugar and molasses imported from outside the empire. In the U.S., from 1868 until 1913, 90 percent of all federal revenue came from taxes on liquor, beer, wine, and tobacco.

Over time, the rationale for these taxes has shifted toward “sin taxes” designed to nudge consumers away from harmful or distasteful consumption. The Temperance movement in the U.S. argued for higher taxes to discourage alcohol consumption. Since the Surgeon General’s warning on the dangers of smoking, tobacco tax increases have been justified as a way to get smokers to quit. More recently, a perceived obesity epidemic has led several American cities, as well as Thailand, Britain, Ireland, and South Africa, to impose taxes on sugar-sweetened beverages to reduce sugar consumption.

Because demand curves slope down, “sin taxes” do change behavior by reducing the quantity demanded. However, for many products subject to such taxes, demand is not especially responsive. For example, as shown in the figure below, a one percent increase in the price of tobacco is associated with a one-half of one percent decrease in sales.

[Figure: The Economist, estimated price elasticities of demand for “sin” goods such as tobacco]
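A back-of-the-envelope calculation shows what that elasticity implies. The constant-elasticity approximation below is a minimal sketch; the –0.5 figure for tobacco is the only number taken from the article:

```python
def pct_quantity_change(pct_price_change, elasticity):
    """Constant-elasticity approximation: percent change in quantity
    demanded for a given percent change in price."""
    return elasticity * pct_price_change

# Tobacco, with an own-price elasticity of roughly -0.5:
print(pct_quantity_change(1.0, -0.5))   # 1% price hike  -> -0.5% in sales
print(pct_quantity_change(10.0, -0.5))  # 10% price hike -> -5.0% in sales
```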


Substitutability is another consideration for tax policy. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. A spike in tobacco taxes in one state will result in a spike in sales in bordering states as well as increase illegal interstate sales or smuggling. The Economist reports:

After Berkeley introduced its tax, sales of sugary drinks rose by 6.9% in neighbouring cities. Denmark, which instituted a tax on fat-laden foods in 2011, ran into similar problems. The government got rid of the tax a year later when it discovered that many shoppers were buying butter in neighbouring Germany and Sweden.

Advocates of “sin” taxes on tobacco, alcohol, and sugar argue that consumption of these products imposes negative externalities on the public, since governments have to spend more to take care of sick people. With approximately one-third of the U.S. population covered by some form of government-funded health insurance, such as Medicare or Medicaid, what were once private costs of healthcare have been transformed into a public cost.

According to the Centers for Disease Control and Prevention, smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, of which $5.6 billion is due to secondhand smoke exposure.

On the other hand, The Economist points out:

Smoking, in contrast, probably saves taxpayers money. Lifelong smoking will bring forward a person’s death by about ten years, which means that smokers tend to die just as they would start drawing from state pensions. In a study published in 2002 Kip Viscusi, an economist at Vanderbilt University who has served as an expert witness on behalf of tobacco companies, estimated that even if tobacco were untaxed, Americans could still expect to save the government an average of 32 cents for every pack of cigarettes they smoke.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. For example, much of the direct cost is borne by private insurance, which charges steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy should evaluate the discounted costs imposed by today’s smokers.

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines. Thus, in practice, there is no clear nexus between the taxes levied on tobacco and the government’s spending on smoking-related costs.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, such as e-cigarettes, “heat-not-burn” products, and smokeless tobacco, are considerably less harmful than combustible products.

Many experts believe that the best option for smokers who are unable or unwilling to quit smoking is to switch to a less harmful alternative activity that has similar attributes, such as using non-combustible nicotine delivery products. Policies that encourage smokers to switch from more harmful combustible tobacco products to less harmful non-combustible products would be considered a form of “harm reduction.”

Nine U.S. states now have taxes on vapor products. In addition, several local jurisdictions have enacted taxes. Their methods and levels of taxation vary widely. Policy makers considering a tax on vapor products should account for the following factors.

  • The current market for e-cigarettes as well as heat-not-burn products is in the range of 0-10 percent of the cigarette market. Given the relatively small size of the e-cigarette and heated tobacco product market, it is unlikely that any level of taxation of these products would generate significant tax revenues for the taxing jurisdiction (a stylized calculation following the figure below illustrates why). Moreover, much of the current research likely reflects early adopters and higher-income consumer groups. As such, the current empirical data based on total market size and price/tax levels are likely to be far from indicative of the “actual” market for these products.
  • The demand for e-cigarettes is much more responsive to a change in price than the demand for combustible cigarettes. My review of the published research to date finds the median estimated own-price elasticity is -1.096, close to a 1-to-1 relationship: a tax resulting in a one percent increase in e-cigarette prices would be associated with a one percent decline in e-cigarette sales. Many of those lost sales would shift to purchases of combustible cigarettes.
  • Research on the price responsiveness of vapor products is relatively new and sparse. There are fewer than a dozen published articles, the first of which appeared in 2014. As a result, the literature reports a wide range of estimated elasticities, which calls into question the reliability of published estimates, as shown in the figure below. In such a young area of research, the policy debate would benefit from additional studies with larger samples and better statistical power, studies that reflect the dynamic nature of this new product category and account for the wide variety of vapor products.

[Figure: published estimates of the own-price elasticity of demand for vapor products]
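A stylized calculation suggests why the revenue potential is limited. The sketch below assumes full pass-through of an ad valorem tax to consumer prices, a constant elasticity, and an invented baseline market (1,000 units at $10); only the elasticities come from the discussion above:

```python
def tax_outcome(base_price, base_qty, tax_pct, elasticity):
    """Stylized ad valorem tax with full pass-through: the consumer price
    rises by tax_pct percent, quantity responds via the own-price
    elasticity, and revenue is the tax's share of post-tax spending."""
    new_price = base_price * (1 + tax_pct / 100)
    new_qty = base_qty * (1 + elasticity * tax_pct / 100)
    revenue = new_price * new_qty * tax_pct / (100 + tax_pct)
    return round(new_qty), round(revenue)

# A 10% tax on vapor products (elasticity -1.096) shrinks the base
# almost as fast as the rate rises...
print(tax_outcome(10.0, 1000, 10, elasticity=-1.096))  # (890, 890)
# ...while the same tax on cigarettes (elasticity -0.5) loses fewer sales.
print(tax_outcome(10.0, 1000, 10, elasticity=-0.5))    # (950, 950)
```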

With respect to taxation and pricing, policymakers would benefit from reliable information regarding the size of the vapor product market and the degree to which vapor products substitute for combustible tobacco products. A tax on vapor products may turn out to be, as The Economist says of “sin” taxes generally, less efficient than it looks.

This has been a big year for business in the courts. A U.S. district court approved the AT&T-Time Warner merger, the Supreme Court upheld Amex’s agreements with merchants, and a circuit court pushed back on the Federal Trade Commission’s vague and heavy-handed policing of companies’ consumer data safeguards.

These three decisions mark a new era in the intersection of law and economics.

AT&T-Time Warner

AT&T-Time Warner is a vertical merger, a combination of firms with a buyer-seller relationship. Time Warner creates and broadcasts content via outlets such as HBO, CNN, and TNT. AT&T distributes content via services such as DirecTV.

Economists see little risk to competition from vertical mergers, although there are some idiosyncratic circumstances in which competition could be harmed. Nevertheless, the U.S. Department of Justice went to court to block the merger.

The last time the government sued to block a vertical merger was more than 40 years ago, and the government lost. Since then, the government has relied on the threat of litigation to extract settlements from merging parties. For example, in the 1996 merger between Time Warner and Turner, the FTC required limits on how the new company could bundle HBO with less desirable channels and eliminated agreements that allowed TCI (a cable company that partially owned Turner) to carry Turner channels at preferential rates.

With AT&T-Time Warner, the government took a big risk, and lost. It was a big risk because (1) it’s a vertical merger, and (2) the case against the merger was weak. The government’s expert argued consumers would face an extra 45 cents a month on their cable bills if the merger went through but, under cross-examination, conceded it might be as little as 13 cents a month. That’s a big difference, and it raised big questions about the reliability of the expert’s model.

Judge Richard J. Leon’s 170+ page ruling agreed that the government’s case was weak and its expert was not credible. While it’s easy to cheer a victory of big business over big government, the real victory was the judge’s heavy reliance on facts, data, and analysis rather than speculation over the potential for consumer harm. That’s a big deal and may pave the way for more vertical mergers.

Ohio v. American Express

The Supreme Court’s ruling in Amex may seem obscure. The court backed American Express Co.’s policy of preventing retailers from offering customers incentives to pay with cheaper cards.

Amex charges higher fees to merchants than do other cards, such as Visa, MasterCard, and Discover. Amex cardholders also have higher incomes and tend to spend more at stores than cardholders on other networks. And Amex offers its cardholders better benefits, services, and rewards than the other cards do. Merchants don’t like Amex because of the higher fees; customers prefer Amex because of the card’s perks.

Amex, and other card companies, operate in what is known as a two-sided market. Put simply, they have two sets of customers: merchants who pay swipe fees, and consumers who pay fees and interest.

Part of Amex’s agreement with merchants is an “anti-steering” provision that bars merchants from offering discounts for using non-Amex cards. The U.S. Justice Department and a group of states sued the company, alleging the Amex rules limited merchants’ ability to reduce their costs from accepting credit cards, which meant higher retail prices. Amex argued that the higher prices charged to merchants were kicked back to its cardholders in the form of more and better perks.

The Supreme Court found that the Justice Department and the states focused exclusively on one side (merchant fees) of the two-sided market. The Court said the government can’t meet its burden by showing some effect on some part of the market. Instead, it must demonstrate an “increased cost of credit card transactions … reduced number of credit card transactions, or otherwise stifled competition.” The government could not prove any of those things.

We live in a world of two-sided markets. Amazon may be the biggest two-sided market in the history of the world, linking buyers and sellers. Smartphones such as iPhones and Android devices are two-sided markets, linking consumers with app developers. The Supreme Court’s ruling in Amex sets a standard for how antitrust law should treat the economics of two-sided markets.

LabMD

LabMD is another matter that seems obscure, but could have big impacts on the administrative state.

Since the early 2000s, the FTC has brought charges against more than 150 companies alleging they had bad security or privacy practices. LabMD was one of them; its computer system was compromised by professional hackers in 2008. The FTC claimed that LabMD’s failure to adequately protect customer data was an “unfair” business practice.

Challenging the FTC can get very expensive, and the agency used the threat of litigation to secure settlements from dozens of companies. It then used those settlements to convince everyone else that they constituted binding law and enforceable security standards.

Because no one ever forced the FTC to defend what it was doing in court, the FTC’s assertion of legal authority became a self-fulfilling prophecy. LabMD, however, chose to challenge the FTC. The fight drove LabMD out of business, but the public interest law firm Cause of Action and lawyers at Ropes & Gray took the case on a pro bono basis.

The 11th Circuit Court of Appeals ruled the FTC’s approach to developing security standards violates basic principles of due process. The court said the FTC’s basic approach—in which the FTC tries to improve general security practices by suing companies that experience security breaches—violates the basic legal principle that the government can’t punish someone for conduct that the government hasn’t previously explained is problematic.

My colleague at ICLE observes that the lesson to learn from LabMD isn’t about the illegitimacy of the FTC’s approach to internet privacy and security. Instead, it’s that the legitimacy of the administrative state is premised on courts placing a check on abusive regulators.

The lessons learned from these three recent cases reflect a profound shift in thinking about the laws governing economic activity:

  • AT&T-Time Warner indicates that facts matter. Mere speculation of potential harms will not satisfy the court.
  • Amex highlights the growing role two-sided markets play in our economy and provides a framework for evaluating competition in these markets.
  • LabMD is a small step in reining in the administrative state. Regulations must be scrutinized before they are imposed and enforced.

In some ways, none of these decisions is revolutionary. Instead, they reflect an evolution toward greater transparency in how the law is to be applied and greater scrutiny over how regulations are imposed.


Big is bad, part 1: Kafka, Coase, and Brandeis walk into a bar … There’s a quip in a well-known textbook that Nobel laureate Ronald Coase said he’d grown weary of antitrust because when prices went up, the judges said it was monopoly; when the prices went down, they said it was predatory pricing; and when they stayed the same, they said it was tacit collusion. ICLE’s Geoffrey Manne and Gus Hurwitz worry that with the rise of the neo-Brandeisians, not much has changed since Coase’s time:

[C]ompetition, on its face, is virtually indistinguishable from anticompetitive behavior. Every firm strives to undercut its rivals, to put its rivals out of business, to increase its rivals’ costs, or to steal its rivals’ customers. The consumer welfare standard provides courts with a concrete mechanism for distinguishing between good and bad conduct, based not on the effect on rival firms but on the effect on consumers. Absent such a standard, any firm could potentially be deemed to violate the antitrust laws for any act it undertakes that could impede its competitors.

Big is bad, part 2. A working paper by researchers from Denmark and the University of California at Berkeley suggests that companies such as Google, Apple, Facebook, and Nike are taking advantage of so-called “tax havens” to cause billions of dollars of income to go “missing.” There’s a lot of mumbo jumbo in this one, but it’s getting lots of attention.

We show theoretically and empirically that in the current international tax system, tax authorities of high-tax countries do not have incentives to combat profit shifting to tax havens. They instead focus their enforcement effort on relocating profits booked in other high-tax places—in effect stealing revenue from each other.

Big is bad, part 3: Can any country survive with debt-to-GDP of more than 100 percent? Apparently, the answer is “yes.” The U.K. went 80 years, from 1779 to 1858, and then another 47 years, from 1916 to 1962. Tim Harford has a fascinating story about an effort to clear the country’s debt during that second run.

In 1928, an anonymous donor resolved to clear the UK’s national debt and gave £500,000 with that end in mind. It was a tidy sum — almost £30m at today’s prices — but not nearly enough to pay off the debt. So it sat in trust, accumulating interest, for nearly a century.

How do you make a small fortune? Begin with a big one. A lesson from Johnny Depp.

Will we ever stop debating the Trolley Problem? Apparently the answer is “no.” Also, TIL there’s a field of research that relies on “notions.”

For so long, moral psychology has relied on the notion that you can extrapolate from people’s decisions in hypothetical thought experiments to infer something meaningful about how they would behave morally in the real world. These new findings challenge that core assumption of the field.


The week that was on Truth on the Market

LabMD.

[T]argets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

Google Android.

Thus, had Google opted instead to create a separate walled garden of its own on the Apple model, everything it had done would have otherwise been fine. This means that Google is now subject to an antitrust investigation for attempting to develop a more open platform.

AT&T-Time Warner. First this:

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

Then this:

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content.


AT&T’s merger with Time Warner has led to one of the most important, but least interesting, antitrust trials in recent history.

The merger itself is somewhat unimportant to consumers. It’s about as close to a “pure” vertical merger as we can get in today’s world and would not lead to a measurable increase in prices paid by consumers. At the same time, Judge Richard J. Leon’s decision to approve the merger may have sent a signal regarding how the anticipated Fox-Disney (or Comcast), CVS-Aetna, and Cigna-Express Scripts mergers might proceed.

Judge Leon of the United States District Court in Washington said the U.S. Department of Justice had not proved that AT&T’s acquisition of Time Warner would lead to fewer choices for consumers and higher prices for television and internet services.

As shown in the figure below, there is virtually no overlap in services provided by Time Warner (content creation and broadcasting) and AT&T (content distribution). We say “virtually” because, through its ownership of DirecTV, AT&T has an ownership stake in several channels, such as the Game Show Network, the MLB Network, and Root Sports. So, not a “pure” vertical merger, but pretty close. Besides, no one seems to really care about GSN, MLB, or Root.

[Infographic: What's at Stake in the Proposed AT&T-Time Warner Merger | Statista]

The merger trial was one of the least interesting because the government’s case opposing the merger was so weak.

The Justice Department’s economic expert, University of California, Berkeley, professor Carl Shapiro, argued the merger would harm consumers and competition in three ways:

  1. AT&T would raise the price of content to other cable companies, driving up their costs, which would be passed on to consumers.
  2. Across more than 1,000 subscription television markets, AT&T could benefit by drawing customers away from rival content distributors in the event of a “blackout,” in which the distributor chooses not to carry Time Warner content over a pricing dispute. In addition, AT&T could also use its control over Time Warner content to retain customers by discouraging consumers from switching to providers that don’t carry the Time Warner content. Those two factors, according to Shapiro, could cause rival cable companies to lose between 9 and 14 percent of their subscribers over the long term.
  3. AT&T and competitor Comcast could coordinate to restrict access to popular Time Warner and NBC content in ways that could stifle competition from online cable alternatives such as Dish Network’s Sling TV or Sony’s PlayStation Vue. Even tacit coordination of this type would impair consumer choices, Shapiro opined.

Price increases and blackouts

Shapiro initially indicated the merger would cause consumers to pay an additional $436 million a year, which amounts to an average of 45 cents a month per customer, or a 0.4 percent increase. At trial, he testified the amount might be closer to 27 cents a month and conceded it could be as low as 13 cents a month.

The government’s “blackout” arguments seemed to get lost in the shifting sands of survey results. Blackouts mattered, according to Shapiro, because “Even though they don’t happen very much, that’s the key to leverage.” His testimony on the potential for price hikes relied heavily on a study commissioned by Charter Communications Inc., which opposes the merger. Stefan Bewley, a director at the consulting firm Altman Vilandrie & Co., which produced the study, testified the report predicted Charter would lose 9 percent of its subscribers if it lost access to Turner programming.

Under cross-examination by AT&T’s lawyer, Bewley acknowledged what was described as a “final” version of the study presented to Charter in April last year put the subscriber loss estimate at 5 percent. When confronted with his own emails about the change to 9 percent, Bewley said he agreed to the update after meeting with Charter. At the time of the change from 5 percent to 9 percent, Charter was discussing its opposition to the merger with the Justice Department.

Bewley noted that the change occurred because he saw that some of the figures his team had gathered about Turner networks were outliers, with a range of subscriber losses from 5 percent on the low end to 14 percent on the high end. He indicated his team came up with a “weighted average” of 9 percent.

This 5/9/14 percent distinction seems critical to the government’s claim that the merger would raise consumer prices. Referring to Shapiro’s analysis, AT&T-Time Warner’s lead counsel, Daniel Petrocelli, asked Bewley: “Are you aware that if he’d used 5 percent there would have been a price increase of zero?” Bewley said he was not aware.

At trial, AT&T and Turner executives testified that they couldn’t credibly threaten to withhold Turner programming from rivals because the networks’ profitability depends on wide distribution. In addition, one of AT&T’s expert witnesses, University of California, Berkeley business and economics professor Michael Katz, testified about what he said were the benefits of AT&T’s offer to use “baseball style” arbitration with rival pay TV distributors if the two sides couldn’t agree on what fees to pay for Time Warner’s Turner networks. With baseball style arbitration, both sides submit their final offer to an arbitrator, who determines which of the two offers is most appropriate.
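Final-offer arbitration is simple to model in stylized form. In the toy sketch below (all fee figures hypothetical), the arbitrator picks whichever final offer is closer to its own estimate of a fair fee; that selection rule is what pushes both sides toward moderate offers:

```python
def final_offer_arbitration(offer_a, offer_b, fair_value):
    """Return the winning offer: the one closer to the arbitrator's
    independent estimate of a fair per-subscriber fee."""
    if abs(offer_a - fair_value) <= abs(offer_b - fair_value):
        return offer_a
    return offer_b

# Hypothetical monthly fees per subscriber: the distributor offers $1.80,
# the network demands $2.60, and the arbitrator's estimate is $2.10.
print(final_offer_arbitration(1.80, 2.60, 2.10))  # 1.8 (the closer offer)
```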

Under the terms of the arbitration offer, AT&T has agreed not to black out its networks for the duration of negotiations with distributors. Dennis Carlton, an economics professor at the University of Chicago, said Shapiro’s model was unreliable because he didn’t account for that. Shapiro conceded he did not factor that into his study, saying that he would need to use an entirely different model to study how the arbitration agreement would affect the merger.

Coordination with Comcast/NBCUniversal

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

At trial, the Justice Department seemed to abandon any claim that the merged company would unilaterally restrict access to online “virtual MVPDs.” The government’s case, made by its expert Shapiro, ended up being there would be a “risk” and “danger” that AT&T and Comcast would “coordinate” to withhold programming in a way to harm emerging online multichannel distributors. However, under cross examination, he conceded that his opinions were not based on a “quantifiable model.” Shapiro testified that he had no opinion whether the odds of such coordination would be greater than 1 percent.

Doing no favors to its case, the government turned to a seemingly contradictory argument that AT&T and Comcast would coordinate to demand virtual providers take too much content. Emerging online multichannel distributors pitch their offerings as “skinny bundles” with a limited selection of the more popular channels. By forcing these providers to take more channels, the government argued, the skinny bundle business model would be undermined, in a version of raising rivals’ costs. This theory did not get much play at trial, but it suggests the government was trying to have its cake and eat it, too.

Except in this case, as with much of the government’s case in this matter, the cake was not completely baked.


Weekend Reads

Eric Fruits —  8 June 2018

Innovation dies in darkness. Well, actually, it thrives in the light, according to this new research:

We find that after a patent library opens, local patenting increases by 17% relative to control regions that have Federal Depository Libraries. … [T]he library boost ceases to be present after the introduction of the Internet. We find that library opening is also associated with an increase in local business formation and job creation [especially for small business -ed.], which suggests that the impact of libraries is not limited to patenting outcomes.


Don’t drink the Kool-Aid of bad data. Have a SPRITE. From the article published by the self-described “data thugs”:

Scientific publications have not traditionally been accompanied by data, either during the peer review process or when published. Concern has arisen that the literature in many fields may contain inaccuracies or errors that cannot be detected without inspecting the original data. Here, we introduce SPRITE (Sample Parameter Reconstruction via Iterative TEchniques), a heuristic method for reconstructing plausible samples from descriptive statistics of granular data, allowing reviewers, editors, readers, and future researchers to gain insights into the possible distributions of item values in the original data set.
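To make the idea concrete, here is a toy, heavily simplified version of a SPRITE-style search for integer (e.g., Likert-scale) data. It is a sketch of the general approach under stated assumptions (the reported mean lies within the scale bounds), not the authors’ implementation:

```python
import random
import statistics

def sprite_sketch(n, mean, sd, lo, hi, tol=0.02, max_iter=20000):
    """Search for one plausible integer sample on the scale [lo, hi]
    whose mean and standard deviation match reported values."""
    target_sum = round(mean * n)  # assumes lo <= mean <= hi
    sample = [random.randint(lo, hi) for _ in range(n)]
    # Step 1: nudge single values until the sum (hence the mean) is exact.
    while sum(sample) != target_sum:
        i = random.randrange(n)
        step = 1 if sum(sample) < target_sum else -1
        if lo <= sample[i] + step <= hi:
            sample[i] += step
    # Step 2: +1/-1 pair moves preserve the mean while shifting the SD.
    for _ in range(max_iter):
        gap = abs(statistics.stdev(sample) - sd)
        if gap < tol:
            return sorted(sample)  # one candidate underlying data set
        i, j = random.sample(range(n), 2)
        if sample[i] + 1 <= hi and sample[j] - 1 >= lo:
            sample[i] += 1
            sample[j] -= 1
            if abs(statistics.stdev(sample) - sd) > gap:
                sample[i] -= 1  # undo moves that widen the gap
                sample[j] += 1
    return None  # nothing found: the reported statistics may be impossible

# Could a 5-point scale item with n = 20, mean = 3.5, and SD = 1.0 exist?
print(sprite_sketch(20, 3.5, 1.0, 1, 5))
```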

Gig economy, it’s a good thing: 6.9% of all workers are independent contractors; 79% of them prefer their arrangement over a traditional job.

Gig economy, it’s a bad thing. Maybe.

[C]ensus divisions with relatively weak wage inflation also tend to have more “low-wage” informal FTE—that is, more hours of informal work performed at a wage that is less than formal pay.

Broetry. It’s a LinkedIn thing. I don’t get it.


Weekend reads

Eric Fruits —  1 June 2018

Good government dies in the darkness. This article is getting a lot of attention on Wonk Twitter and what’s left of the blogosphere. From the abstract:

We examine the effect of local newspaper closures on public finance for local governments. Following a newspaper closure, we find municipal borrowing costs increase by 5 to 11 basis points in the long run …. [T]hese results are not being driven by deteriorating local economic conditions. The loss of monitoring that results from newspaper closures is associated with increased government inefficiencies, including higher likelihoods of costly advance refundings and negotiated issues, and higher government wages, employees, and tax revenues.

What the hell happened at GE? This guy blames Jeff Immelt’s buy-high/sell-low strategy. I blame Jack Welch.

Academic writing is terrible. Science journalist Anna Clemens wants to change that. (Plus, she quotes one of my grad school professors, Paul Zak.) Here’s what Clemens says about turning your research into a story:

But – just as with any Hollywood success in the box office – your paper will not become a page-turner, if you don’t introduce an element of tension now. Your readers want to know what problem you are solving here. So, tell them what gap in the literature needs to be filled, why method X isn’t good enough to solve Y, or what still isn’t known about mechanism Z. To introduce the tension, words such as “however”, “despite”, “nevertheless”, “but”, “although” are your best friends. But don’t fool your readers with general statements, phrase the problem precisely.

Write for the busy reader. While you’re writing your next book, paper, or op-ed, check out what the readability robots think of your writing.

They tell me I’ll get more hits if I mention Bitcoin and blockchain. Um, OK. Here goes. The Seattle Times reports on the mind-blowing amount of power cryptocurrency miners are trying to buy in the electricity-rich Pacific Northwest:

In one case this winter, miners from China landed their private jet at the local airport, drove a rental car to the visitor center at the Rocky Reach Dam, just north of Wenatchee, and, according to Chelan County PUD officials, politely asked to see the “dam master because we want to buy some electricity.”

You will never find a more wretched hive of scum and villainy. The Wild West of regulating cryptocurrencies:

The government must show that the trader intended to artificially affect the price. The Federal District Court in Manhattan once explained that “entering into a legitimate transaction knowing that it will distort the market is not manipulation — only intent, not knowledge, can transform a legitimate transaction into manipulation.”

Tyler Cowen on what’s wrong with the Internet. Hint: It’s you.

And if you hate Twitter, it is your fault for following the wrong people (try hating yourself instead!).  Follow experts and people of substance, not people who seek to lower the status of others.

If that fails, “mute words” is your friend. Muting a few terms made my Twitter experience significantly more enjoyable and informative.


If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.

Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.

In 2017, he published “The Power of Bias in Economics Research.” He recently talked to Russ Roberts on the EconTalk podcast about his research and what it means for economics.

He focuses on two factors that contribute to bias in economics research: publication bias and low power. These are complicated topics. This post hopes to provide a simplified explanation of these issues and why bias and power matter.

What is bias?

We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.

In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.
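A quick simulation makes the distinction concrete. The estimation procedure below is perfectly honest (just counting heads), yet when the coin itself is bent, the results are biased relative to the “true” 50 percent:

```python
import random

def estimate_heads_probability(p_true, tosses=10_000, seed=7):
    """Estimate P(heads) as the share of heads in repeated tosses.
    The procedure is honest; any bias comes from the coin itself."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p_true for _ in range(tosses))
    return heads / tosses

print(estimate_heads_probability(0.50))  # fair coin: estimate near 0.50
print(estimate_heads_probability(0.75))  # bent coin: estimate near 0.75
```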

Publication bias

Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.

Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.

The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).

“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.

On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger’s 1994 study finding that a minimum wage hike was associated with an increase in the employment of low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it’s much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.

Low power

A study with low statistical power has a reduced chance of detecting a true effect.

Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.

[Figure: Type I and Type II errors]

An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (null hypothesis) combined with a high burden of proof (beyond a reasonable doubt) are designed to reduce these errors. In statistics, this is known as “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.

Failing to convict a known criminal is also a serious error, though it’s generally agreed to be less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error or “false negative,” and the probability of making a Type II error is called beta.

By now, it should be clear there’s a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we are going to increase the chance of letting some criminals go free. It can be shown mathematically (not here) that a reduction in the probability of Type I error is associated with an increase in the probability of Type II error.
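A small simulation can stand in for the math. The setup is illustrative (a one-sided z-test with known variance and a true effect of 0.3 standard deviations), but it shows the tradeoff: shrinking alpha mechanically inflates beta:

```python
import random
import statistics

def rejection_rate(true_mean, alpha, n=30, sims=4000, seed=1):
    """Share of simulated samples in which a one-sided z-test rejects
    H0: mean <= 0 at level alpha (population sd known and equal to 1)."""
    rng = random.Random(seed)
    crit = statistics.NormalDist().inv_cdf(1 - alpha)  # critical z-value
    rejections = 0
    for _ in range(sims):
        xbar = statistics.mean(rng.gauss(true_mean, 1) for _ in range(n))
        rejections += xbar * n ** 0.5 > crit  # z = mean / (sd / sqrt(n))
    return rejections / sims

for alpha in (0.10, 0.05, 0.01):
    beta = 1 - rejection_rate(true_mean=0.3, alpha=alpha)  # Type II rate
    print(f"alpha = {alpha:.2f} -> beta ~ {beta:.2f}")
# As alpha falls (fewer wrongful convictions), beta rises
# (more guilty suspects go free).
```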

Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.

In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize to be the relationship. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated coefficient on the elasticity of employment with respect to the minimum wage would be zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.

Now, we’re getting to power.

Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.

By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent, then if we know that there are 100 actual effects, the studies will find only 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will find only 20 of them guilty.

Suppose we expect 25 percent of those accused of a crime are truly guilty of the crime. Thus the odds of guilt are R = 0.25 / 0.75 = 0.33. Assume we set alpha to 0.05, and conclude the accused is guilty if our test statistic provides p < 0.05. Using Ioannidis’ formula for positive predictive value, we find:

  • If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
  • If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.

In other words, a low power test is more likely to convict the innocent than a high power test.
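Those two bullets follow directly from Ioannidis’ positive predictive value formula, PPV = (power × R) / (power × R + α), which is easy to verify:

```python
def positive_predictive_value(power, alpha, odds):
    """Probability that a statistically significant finding (a "guilty"
    verdict) reflects a true effect (true guilt), given prior odds R."""
    return (power * odds) / (power * odds + alpha)

R = 0.25 / 0.75  # 25% of the accused truly guilty -> odds of about 0.33
print(round(positive_predictive_value(0.20, 0.05, R), 2))  # 0.57
print(round(positive_predictive_value(0.80, 0.05, R), 2))  # 0.84
```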

In our minimum wage example, a low power study is more likely to find a relationship between a change in the minimum wage and employment when no relationship truly exists. By extension, even if a relationship truly exists, a low power study would be more likely to find a bigger impact than a high power study. The figure below demonstrates this phenomenon.

[Figure: funnel plot of estimated minimum wage elasticities versus precision]

Across the 1,424 studies surveyed, the average elasticity with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate that among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.

(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)
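The precision adjustment is, in essence, a weighted mean in which each study’s estimate is weighted by its precision. A toy example with invented numbers shows how a single precise study pulls the average toward small effects:

```python
def precision_weighted_mean(estimates, precisions):
    """Average the estimates, weighting each by its precision (e.g., the
    inverse of its standard error) so noisier studies count for less."""
    return sum(e * w for e, w in zip(estimates, precisions)) / sum(precisions)

# A made-up mini-literature: two imprecise studies with large negative
# elasticities and one precise study with a small one.
estimates = [-0.40, -0.30, -0.05]
precisions = [5, 8, 250]
print(round(precision_weighted_mean(estimates, precisions), 3))  # -0.064
```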

Is economics bogus?

It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:

Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.

For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:

[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.

Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields—social psychology is one of the worst. However, these are longer-term solutions.

In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those sensational findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”


“Houston, we have a problem.” It’s the most famous line from Apollo 13 and perhaps how most Republicans are feeling about their plans to repeal and replace Obamacare.

As repeal and replace has given way to tinker and punt, Congress should take a lesson from one of my favorite scenes from Apollo 13.

“We gotta find a way to make this, fit into the hole for this, using nothing but that.”

Let’s look at a way Congress can get rid of the individual mandate, lower prices, cover pre-existing conditions, and provide universal coverage, using the box of tools that we already have on the table.

Some ground rules

First ground rule: (Near) universal access to health insurance. It’s pretty clear that many, if not most, Americans believe that everyone should have health insurance. Some go so far as to call it a “basic human right.” This may be one of the biggest shifts in U.S. public opinion over time.

Second ground rule: Everything has a price; there’s no free lunch. If you want to add another essential benefit, premiums will go up. If you want community rating, young healthy people are going to subsidize older, sicker people. If you want a lower deductible, you’ll pay a higher premium, as shown in the figure below, which plots all the plans available on Oregon’s ACA exchange in 2017. It shows that a $1,000 decrease in deductible is associated with almost $500 a year in additional premium payments. There’s no free lunch.

[Figure: premiums versus deductibles for plans on Oregon’s ACA exchange, 2017]

Third ground rule: No new programs, no radical departures. Maybe Singapore has a better health insurance system. Maybe Canada’s is better. Switching to either system would be a radical departure from the tools we have to work with. This is America. This is Apollo 13. We gotta find a way to make this, fit into the hole for this, using nothing but that.

Private insurance

Employer and individual mandates: Gone. This would be a substantial change from the ACA, but it is written into the Senate health insurance bill. The individual mandate is perhaps the most hated part of the ACA, but it was also the most important part of Obamacare. Without the coverage mandate, much of the ACA falls apart, as we are seeing now.

Community rating, mandated benefits (aka “minimum essential benefits”), and pre-existing conditions. Sen. Ted Cruz has a brilliantly simple idea: As long as an insurer offers at least one ACA-compliant plan in a state, it would also be allowed to offer non-ACA-compliant plans in that state. In other words, every state would have at least one plan that checks all the Obamacare boxes of community rating, minimum essential benefits, and pre-existing conditions. If you like Obamacare, you can keep Obamacare. In addition, there could be hundreds of other plans from which consumers can pick based on each person’s unique situation of age, health status, and ability/willingness to pay. A single healthy 27-year-old would likely choose a plan that’s very different from a plan chosen by a family of four with 40-something parents and school-aged children.

Allow—but don’t require—insurance to be bought and sold across state lines. I don’t know if this is a big deal or not. Some folks on the right think this could be a panacea. Some folks on the left think this is terrible and would never work. Let’s find out. Some say insurance companies don’t want to sell policies across state lines. Some will, some won’t. Let’s find out, but it shouldn’t be illegal. No one is worse off by loosening a constraint.

Tax deduction for insurance premiums. Keep insurance premiums as a deductible expense for business: No change from current law. In addition, make insurance premiums deductible on individual taxes. This is a not-so-radical change from current law, which allows deductions for medical expenses. If someone has employer-provided insurance, the business would be able to deduct the share the company pays, and the worker would be able to deduct the employee share of the premium on his or her personal taxes. Sure, the deduction will reduce tax revenues, but the increase in private insurance coverage would reduce the costs of Medicaid and charity care.

These straightforward changes would preserve one or more ACA-compliant plans for those who want to pay Obamacare’s “silver prices,” allow for consumer choice across other plans, and result in premiums more closely aligned with the benefits chosen by consumers. Allowing individuals to deduct health insurance premiums is also a crucial step in fostering insurance portability.

Medicaid

Even with the changes in the private market, some consumers will find that they can’t afford or don’t want to pay the market price for private insurance. These people would automatically be moved into Medicaid. Those in poverty (or below some X percent of the poverty line) would pay nothing, and everyone else would be charged a “premium” based on ability to pay. A single mother in poverty would pay nothing for Medicaid coverage, but Elon Musk (if he chose this option) would pay the full price. A middle class family would pay something in between free and full-price. Yes, this is a pretty wide divergence from the original intent of Medicaid, but it’s a relatively modest change from the ACA’s expansion.
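One way to picture the sliding scale is as a simple income-based schedule. The breakpoints and dollar amounts below are purely hypothetical, not from any bill; the point is only that the premium rises smoothly from zero to the full unsubsidized price:

```python
def medicaid_premium(income, poverty_line, full_price):
    """Hypothetical sliding scale: free at or below the poverty line,
    rising linearly to the full price at four times the poverty line."""
    if income <= poverty_line:
        return 0.0
    share = min((income - poverty_line) / (3 * poverty_line), 1.0)
    return round(share * full_price, 2)

# Assume a $12,000 poverty line and a $6,000 full annual premium.
for income in (10_000, 24_000, 48_000, 100_000):
    print(income, medicaid_premium(income, 12_000, 6_000))
# 10000 -> 0.0, 24000 -> 2000.0, 48000 and above -> 6000.0 (full price)
```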

While the individual mandate goes away, anyone who does not buy insurance in the private market or is not covered by Medicare will be “mandated” to have Medicaid coverage. At the same time, it preserves consumer choice. That is, consumers have a choice of buying an ACA compliant plan, one of the hundreds of other private plans offered throughout the states, or enrolling in Medicaid.

Would the Medicaid rolls explode? Who knows?

The Census Bureau reports that 15 percent of adults and 40 percent of children currently are enrolled in Medicaid. Research published in the New England Journal of Medicine finds that 44 percent of people who enrolled in Medicaid under Obamacare qualified for Medicaid before the ACA.

With low cost private insurance alternatives to Medicaid, some consumers would likely choose the private plans over Medicaid coverage. Also, if Medicaid premiums increased with incomes, able-bodied and working adults would likely shift out of Medicaid to private coverage as the government plan loses its cost-competitiveness.

The cost sharing of income-based premiums means that Medicaid would become partially self-supporting.

Opponents of Medicaid expansion claim that the program provides inferior service: fewer providers, lower quality, worse outcomes. If that’s true, then that’s a feature, not a bug. If consumers have to pay for their government insurance and that coverage is inferior, then consumers have an incentive to exit the Medicaid market and enter the private market. Medicaid becomes the insurer of last resort that it was intended to be.

A win-win

The coverage problem is solved. Every American would have health insurance.

Consumer choice is expanded. By allowing non-ACA-compliant plans, consumers can choose the insurance that fits their unique situation.

The individual mandate penalty is gone. Those who choose not to buy insurance would be placed into Medicaid. Higher-income individuals would pay a portion of the Medicaid costs, but this isn’t a penalty for having no insurance; it’s the price of having insurance.

The pre-existing conditions problem is solved. Americans with pre-existing conditions would have a choice of at least two insurance options: At least one ACA-compliant plan in the private market and Medicaid.

This isn’t a perfect solution, it may not even be a good solution, but it’s a solution that’s better than what we’ve got and better than what Congress has come up with so far. And, it works with the box of tools that’s already been dumped on the table.

On July 1, the minimum wage will spike in several cities and states across the country. Portland, Oregon’s minimum wage will rise by $1.50 to $11.25 an hour. Los Angeles will also hike its minimum wage by $1.50 to $12 an hour. Recent research shows that these hikes will make low-wage workers poorer.

A study evaluating Seattle’s minimum wage increase to $13 an hour, supported and funded in part by the Seattle city government, was released this week along with an NBER working paper version. The papers find that the increase to $13 an hour had significant negative impacts on employment and led to lower incomes for minimum wage workers.

It is the first study of a very high minimum wage for a city. During the study period, Seattle’s minimum wage increased from what had been the nation’s highest state minimum wage to an even higher level. The study is also unique in its use of administrative data with much more detail than is usually available to economics researchers.

Conclusions from the research focusing on Seattle’s increase to $13 an hour are clear: The policy harms those it was designed to help.

  • A loss of more than 5,000 jobs and a 9 percent reduction in hours worked by those who retained their jobs.
  • Low-wage workers lost an average of $125 per month. The minimum wage has always been a terrible way to reduce poverty. In 2015 and 2016, I presented analysis to the Oregon Legislature indicating that incomes would decline with a steep increase in the minimum wage. The Seattle study provides evidence backing up that forecast.
  • Minimum wage supporters point to research from the 1990s that made headlines with its claims that minimum wage increases had no impact on restaurant employment. The authors of the Seattle study were able to replicate the results of these papers by using their own data and imposing the same limitations that the earlier researchers had faced. The Seattle study shows that those earlier papers’ findings were likely driven by their approach and data limitations. This is a big deal, and a novel research approach that gives strength to the Seattle study’s results.

Some inside baseball.

The Seattle Minimum Wage Study was supported and funded in part by the Seattle city government. It’s rare that policy makers go through any effort to measure the effectiveness of their policies, so Seattle should get some points for transparency.

Or not so transparent: The mayor of Seattle commissioned another study, by an advocacy group at Berkeley whose previous work on the minimum wage is uniformly in favor of hiking the minimum wage (they testified before the Oregon Legislature to cheerlead the state’s minimum wage increase). It should come as no surprise that the Berkeley group released its report several days before the city’s “official” study came out.

You might think to yourself, “OK, that’s Seattle. Seattle is different.”

But, maybe Seattle is not that different. In fact, maybe the negative impacts of high minimum wages are universal, as seen in another study that came out this week, this time from Denmark.

In Denmark the minimum wage jumps up by 40 percent when a worker turns 18. The Danish researchers found that this steep increase was associated with employment dropping by one-third, as seen in the chart below from the paper.

[Figure: Kreiner et al., Fig. 1: employment around the age-18 minimum wage increase in Denmark]

Let’s look at what’s going to happen in Oregon. The state’s employment department estimates that about 301,000 jobs will be affected by the rate increase. With employment of almost 1.8 million, that means one in six workers will be affected by the steep hikes going into effect on July 1. That’s a big piece of the work force. By way of comparison, in the past when the minimum wage would increase by five or ten cents a year, only about six percent of the workforce was affected.

This is going to disproportionately affect youth employment. As noted in my testimony to the legislature, unemployment for Oregonians age 16 to 19 is 8.5 percentage points higher than the national average. This was not always the case. In the early 1990s, Oregon’s youth had roughly the same rate of unemployment as the U.S. as a whole. Then, as Oregon’s minimum wage rose relative to the federal minimum wage, Oregon’s youth unemployment worsened. Just this week, Multnomah County made a desperate plea for businesses to hire more youth as summer interns.

It has been suggested that Oregon youth have traded education for work experience—in essence, they have opted to stay in high school or enroll in higher education instead of entering the workforce. The figure below shows, however, that youth unemployment has increased both for those enrolled in school and for those who are not. The figure debunks the notion that education and employment are substitutes. In fact, the large number of students seeking work demonstrates that many youth want employment while they further their education.

[Figure: Oregon youth unemployment, enrolled versus not enrolled in school]

None of these results should be surprising. Minimum wage research is more than a hundred years old. Aside from the “man bites dog” research from the 1990s, economists were broadly in agreement that higher minimum wages would be associated with reduced employment, especially among youth. The research published this week is groundbreaking in its data and methodology. At the same time, the results are unsurprising to anyone with any understanding of economics or experience running a business.