Archives For behavioral economics

The Economist takes on “sin taxes” in a recent article, “‘Sin’ taxes—eg, on tobacco—are less efficient than they look.” The article has several lessons for policy makers eyeing taxes on e-cigarettes and other vapor products.

Historically, taxes had the key purpose of raising revenues. The “best” taxes would be on goods with few substitutes (i.e., inelastic demand) and on goods deemed to be luxuries. In The Wealth of Nations, Adam Smith notes:

Sugar, rum, and tobacco are commodities which are nowhere necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.

The Economist notes that in 1764 a fiscal crisis, driven by wars in North America, led Britain’s parliament to impose tariffs on sugar and molasses imported from outside the empire. In the U.S., from 1868 until 1913, 90 percent of all federal revenue came from taxes on liquor, beer, wine, and tobacco.

Over time, the rationale for these taxes has shifted toward “sin taxes” designed to nudge consumers away from harmful or distasteful consumption. The Temperance movement in the U.S. argued for higher taxes to discourage alcohol consumption. Since the Surgeon General’s warning on the dangers of smoking, tobacco tax increases have been justified as a way to get smokers to quit. More recently, a perceived obesity epidemic has led several American cities, as well as Thailand, Britain, Ireland, and South Africa, to impose taxes on sugar-sweetened beverages to reduce sugar consumption.

Because demand curves slope down, “sin taxes” do change behavior by reducing the quantity demanded. However, for many products subject to such taxes, demand is not especially responsive. For example, as shown in the figure below, a one percent increase in the price of tobacco is associated with a one-half of one percent decrease in sales.

[Figure: price responsiveness of demand for tobacco and other taxed goods (The Economist)]
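To make the arithmetic concrete, here is a minimal sketch of a constant-elasticity demand calculation. It illustrates the elasticity cited above; it is not The Economist’s own computation, and the function name is mine:

```python
# Constant-elasticity demand: %change in Q = ((1 + %change in P / 100) ** elasticity - 1) * 100.
# For small price changes this is approximately elasticity * %change in P.
def pct_quantity_change(elasticity, pct_price_change):
    """Percent change in quantity demanded for a given percent price change."""
    return ((1 + pct_price_change / 100) ** elasticity - 1) * 100

# Tobacco's elasticity of roughly -0.5 means demand is inelastic:
print(round(pct_quantity_change(-0.5, 1), 2))   # -> -0.5 (a 1% price rise)
print(round(pct_quantity_change(-0.5, 10), 1))  # -> -4.7 (a 10% price rise)
```

Because the elasticity is well below one in absolute value, even large tax-driven price increases translate into modest reductions in quantity, which is why such taxes raise revenue reliably.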

 

Substitutability is another consideration for tax policy. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. A spike in tobacco taxes in one state will result in a spike in sales in bordering states as well as increase illegal interstate sales or smuggling. The Economist reports:

After Berkeley introduced its tax, sales of sugary drinks rose by 6.9% in neighbouring cities. Denmark, which instituted a tax on fat-laden foods in 2011, ran into similar problems. The government got rid of the tax a year later when it discovered that many shoppers were buying butter in neighbouring Germany and Sweden.

Advocates of “sin” taxes on tobacco, alcohol, and sugar argue that consumption of these goods imposes negative externalities on the public, since governments have to spend more to take care of sick people. With approximately one-third of the U.S. population covered by some form of government-funded health insurance, such as Medicare or Medicaid, what were once private costs of healthcare have been transformed into a public cost.

According to the Centers for Disease Control and Prevention, smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, of which $5.6 billion is attributable to secondhand smoke exposure.

On the other hand, The Economist points out:

Smoking, in contrast, probably saves taxpayers money. Lifelong smoking will bring forward a person’s death by about ten years, which means that smokers tend to die just as they would start drawing from state pensions. In a study published in 2002 Kip Viscusi, an economist at Vanderbilt University who has served as an expert witness on behalf of tobacco companies, estimated that even if tobacco were untaxed, Americans could still expect to save the government an average of 32 cents for every pack of cigarettes they smoke.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. For example, much of the direct cost is borne by private insurers, which charge steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy should evaluate the discounted costs imposed by today’s smokers.

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines. Thus, in practice, there is no clear nexus between the taxes levied on tobacco and the government’s use of the tax revenues on smoking-related costs.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, such as e-cigarettes, “heat-not-burn” products, and smokeless tobacco, are considerably less harmful than combustible products.

Many experts believe that the best option for smokers who are unable or unwilling to quit smoking is to switch to a less harmful alternative activity that has similar attributes, such as using non-combustible nicotine delivery products. Policies that encourage smokers to switch from more harmful combustible tobacco products to less harmful non-combustible products would be considered a form of “harm reduction.”

Nine U.S. states now have taxes on vapor products. In addition, several local jurisdictions have enacted taxes. Their methods and levels of taxation vary widely. Policy makers considering a tax on vapor products should account for the following factors.

  • The current market for e-cigarettes and heat-not-burn products is in the range of 0-10 percent of the cigarette market. Given the relatively small size of the e-cigarette and heated tobacco product market, it is unlikely that any level of taxation of these products would generate significant tax revenues for the taxing jurisdiction. Moreover, much of the current research likely reflects early adopters and higher-income consumer groups. As such, current empirical estimates based on total market size and price/tax levels are likely to be far from indicative of the “actual” market for these products.
  • The demand for e-cigarettes is much more responsive to a change in price than the demand for combustible cigarettes. My review of the published research to date finds the median estimated own-price elasticity is -1.096, meaning something close to a 1-to-1 relationship: a tax resulting in a one percent increase in e-cigarette prices would be associated with a one percent decline in e-cigarette sales. Many of those lost sales would be shifted to purchases of combustible cigarettes.
  • Research on the price responsiveness of vapor products is relatively new and sparse. There are fewer than a dozen published articles, and the first article was published in 2014. As a result, the literature reports a wide range of estimated elasticities that calls into question the reliability of published estimates, as shown in the figure below. As a relatively unformed area of research, the policy debate would benefit from additional research that involves larger samples with better statistical power, reflects the dynamic nature of this new product category, and accounts for the wide variety of vapor products.
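A back-of-envelope calculation illustrates the first two bullets. The market size, tax rate, and units below are hypothetical round numbers chosen only to show the mechanics; only the -1.096 elasticity comes from the literature discussed above:

```python
# Why a vapor tax raises little revenue: the base is small (a few percent
# of the cigarette market) and demand is elastic (median estimate: -1.096).
# All figures are hypothetical, for illustration only.
cig_market = 100.0   # cigarette market size, arbitrary units
vapor_share = 0.05   # vapor market assumed at 5% of the cigarette market
elasticity = -1.096  # median own-price elasticity from the literature
tax_pct = 20.0       # a tax that raises vapor prices by 20 percent

base = cig_market * vapor_share                      # pre-tax vapor sales
new_base = base * (1 + tax_pct / 100) ** elasticity  # post-tax vapor sales
revenue = new_base * tax_pct / 100                   # revenue, same units

print(round(new_base, 2))  # -> 4.09: sales fall roughly 18 percent
print(round(revenue, 2))   # -> 0.82: under 1 percent of the cigarette market
```

And because some of the lost vapor sales would shift back to combustible cigarettes, the public health cost of such a tax could easily exceed its modest revenue.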

 

With respect to taxation and pricing, policymakers would benefit from reliable information regarding the size of the vapor product market and the degree to which vapor products are substitutes for combustible tobacco products. It may turn out that taxes on vapor products are, as The Economist says of sin taxes, less efficient than they look.

If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.

Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.

In 2017, he published “The Power of Bias in Economics Research.” He recently talked to Russ Roberts on the EconTalk podcast about his research and what it means for economics.

He focuses on two factors that contribute to bias in economics research: publication bias and low power. These are complicated topics. This post aims to provide a simplified explanation of these issues and why bias and power matter.

What is bias?

We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.

In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.
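The coin example can be simulated directly. In the sketch below the estimation procedure is identical in both cases; only the instrument differs. This is a toy illustration of the point above, not a formal definition of estimator bias:

```python
import random

# An honest procedure applied to a biased instrument yields biased results:
# estimates from the bent coin cluster around 0.75, not the "true" fair
# value of 0.5, no matter how carefully the flips are counted.
random.seed(42)  # fixed seed so the simulation is reproducible

def estimate_p_heads(p_true, n_flips):
    """Estimate P(heads) by flipping a coin with true probability p_true."""
    heads = sum(random.random() < p_true for _ in range(n_flips))
    return heads / n_flips

fair_estimate = estimate_p_heads(0.50, 10_000)  # unbiased instrument
bent_estimate = estimate_p_heads(0.75, 10_000)  # biased instrument
print(round(fair_estimate, 2), round(bent_estimate, 2))
```

No amount of honest flipping moves the second estimate toward 0.5; the bias lives in the experiment, not the experimenter.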

Publication bias

Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.

Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.

The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).

“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.

On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger’s 1994 study finding that a minimum wage hike was associated with an increase in employment of low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it’s much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.

Low power

A study with low statistical power has a reduced chance of detecting a true effect.

Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.

[Figure: Type I and Type II errors]

An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (the null hypothesis) combined with a high burden of proof (beyond a reasonable doubt) is designed to reduce these errors. In statistics, this is known as “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.

Failing to convict a known criminal is also a serious error, though it is generally agreed to be less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error, or “false negative,” and the probability of making a Type II error is called beta.

By now, it should be clear there’s a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we increase the chance of letting some criminals go free. It can be shown mathematically (though not here) that, all else equal, a reduction in the probability of Type I error is associated with an increase in the probability of Type II error.
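The trade-off can be made concrete with a one-sided z-test and a hypothetical true effect of two standard errors. The numbers here are a sketch of the mechanics, not a derivation: as alpha is tightened, beta rises and power falls.

```python
from statistics import NormalDist

# Tightening alpha (fewer wrongful convictions) mechanically raises beta
# (more guilty suspects go free), holding the true effect and sample fixed.
z = NormalDist()
effect = 2.0  # hypothetical true effect, measured in standard errors

for alpha in (0.10, 0.05, 0.01):
    z_crit = z.inv_cdf(1 - alpha)   # rejection threshold for this alpha
    beta = z.cdf(z_crit - effect)   # P(fail to reject | effect is real)
    print(f"alpha={alpha:.2f}  beta={beta:.2f}  power={1 - beta:.2f}")
```

Running this shows beta climbing as alpha shrinks: the stricter the standard of proof, the more true effects (guilty suspects) slip through.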

Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.

In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize the relationship to be. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated elasticity of employment with respect to the minimum wage is zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the allowed probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.
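The meaning of alpha can be checked by simulation: generate many datasets in which the null is true by construction, and count how often the test rejects anyway. This is a sketch using a simple z-test on made-up data, not a minimum wage model:

```python
import random
from statistics import NormalDist, mean, stdev

# When no relationship exists, a test run at alpha = 0.05 still "finds"
# one in roughly 5 percent of studies -- false positives by design.
random.seed(0)
z = NormalDist()

def one_study(n=50):
    """Simulate one study where the true effect is exactly zero."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z_stat = mean(sample) / (stdev(sample) / n ** 0.5)
    return 2 * (1 - z.cdf(abs(z_stat)))  # two-sided p-value

p_values = [one_study() for _ in range(2000)]
false_positive_rate = sum(p < 0.05 for p in p_values) / len(p_values)
print(round(false_positive_rate, 3))  # close to 0.05
```

Raising the threshold to p < 0.10 in the last step would roughly double the rejection count, illustrating the sentence above: a looser alpha finds more relationships, real or not.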

Now, we’re getting to power.

Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.

By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent, then out of 100 actual effects, the studies will detect only 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will convict only 20 of them.

Suppose we expect that 25 percent of those accused of a crime are truly guilty. The pre-study odds of guilt are then R = 0.25 / 0.75 = 0.33. Assume we set alpha to 0.05 and conclude the accused is guilty if our test statistic yields p < 0.05. Using Ioannidis’ formula for positive predictive value, PPV = (1 – β)R / (R – βR + α), where 1 – β is the power of the test, we find:

  • If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
  • If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.

In other words, a “guilty” verdict from a low power test is more likely to be a wrongful conviction than one from a high power test.
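The two bullets above follow directly from Ioannidis’ formula, which reduces to power × R / (power × R + alpha) when, as here, other sources of bias are ignored:

```python
# Positive predictive value: the probability that a statistically
# significant finding (a "guilty" verdict) reflects a true effect.
def ppv(R, alpha, power):
    """Ioannidis' PPV = (1 - beta) * R / (R - beta * R + alpha)."""
    return power * R / (power * R + alpha)

R = 0.25 / 0.75  # 25% of the accused truly guilty -> pre-study odds of 1/3

print(round(ppv(R, alpha=0.05, power=0.20), 2))  # -> 0.57
print(round(ppv(R, alpha=0.05, power=0.80), 2))  # -> 0.84
```

Note that PPV also falls as R falls: the more improbable the hypotheses a field tests, the less a “significant” result means, independent of power.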

In our minimum wage example, a low power study is more likely to find a relationship between a change in the minimum wage and employment when no relationship truly exists. By extension, even when a relationship truly exists, a low power study is more likely to overstate the size of the impact than a high power study. The figure below demonstrates this phenomenon.

[Figure: funnel graph of minimum wage research, employment elasticity estimates plotted against precision]

Across the 1,424 studies surveyed, the average elasticity of employment with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate that among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.
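The precision adjustment works like the toy example below, which weights each estimate by its precision. The numbers are made up to mimic the pattern in the funnel graph; they are not the surveyed estimates:

```python
# Precision-weighted averaging: imprecise (low-powered) studies, which tend
# to report the biggest effects, are down-weighted. Toy numbers only.
estimates = [-0.40, -0.25, -0.10, -0.05, -0.02]  # elasticity estimates
precisions = [2.0, 5.0, 40.0, 120.0, 260.0]      # e.g., 1 / standard error

unweighted = sum(estimates) / len(estimates)
weighted = sum(e * p for e, p in zip(estimates, precisions)) / sum(precisions)

print(round(unweighted, 3))  # -> -0.164: the simple mean looks large
print(round(weighted, 3))    # -> -0.04: the precise studies dominate
```

The same mechanism drives the gap between –0.190 and –0.054 in the survey: the noisiest studies report the most dramatic elasticities, and weighting by precision shrinks their influence.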

(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)

Is economics bogus?

It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:

Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.

For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:

[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.

Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields (social psychology is among the worst). However, these are longer-term solutions.

In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”

 

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

My paper with Judge Douglas H. Ginsburg (D.C. Circuit; NYU Law), Behavioral Law & Economics: Its Origins, Fatal Flaws, and Implications for Liberty, is posted to SSRN and now published in the Northwestern Law Review.

Here is the abstract:

Behavioral economics combines economics and psychology to produce a body of evidence that individual choice behavior departs from that predicted by neoclassical economics in a number of decision-making situations. Emerging close on the heels of behavioral economics over the past thirty years has been the “behavioral law and economics” movement and its philosophical foundation — so-called “libertarian paternalism.” Even the least paternalistic version of behavioral law and economics makes two central claims about government regulation of seemingly irrational behavior: (1) the behavioral regulatory approach, by manipulating the way in which choices are framed for consumers, will increase welfare as measured by each individual’s own preferences and (2) a central planner can and will implement the behavioral law and economics policy program in a manner that respects liberty and does not limit the choices available to individuals. This Article draws attention to the second and less scrutinized of the behaviorists’ claims, viz., that behavioral law and economics poses no significant threat to liberty and individual autonomy. The behaviorists’ libertarian claims fail on their own terms. So long as behavioral law and economics continues to ignore the value to economic welfare and individual liberty of leaving individuals the freedom to choose and hence to err in making important decisions, “libertarian paternalism” will not only fail to fulfill its promise of increasing welfare while doing no harm to liberty, it will pose a significant risk of reducing both.

Download here.

 

From the WSJ:

White House regulatory chief Cass Sunstein is leaving his post this month to return to Harvard Law School, officials said Friday.

Mr. Sunstein has long been an advocate of behavioral economics in setting policy, the notion that people will respond to incentives, and has argued for restraint in government regulations. As such, he was met with skepticism and opposition by some liberals when he was chosen at the start of the Obama administration.

As administrator of the Office of Information and Regulatory Affairs in the Office of Management and Budget, his formal title, Mr. Sunstein led an effort to look back at existing regulations with an eye toward killing those that are no longer needed or cost effective. The White House estimates that effort has already produced $10 billion in savings over five years, with more to come.

“Cass has shown that it is possible to support economic growth without sacrificing health, safety and the environment,” President Barack Obama said in a statement. He said these reforms and “his tenacious promotion of cost-benefit analysis,” will “benefit Americans for years to come.”

Even so, conservatives point to sweeping new regulations for the financial sector and health care in arguing that the administration has increased the regulatory burden on businesses.

Mr. Sunstein will depart this month for Harvard, where he will rejoin the law school faculty as the Felix Frankfurter Professor of Law and Director of the Program on Behavioral Economics and Public Policy.

It will be interesting to hear, once Professor Sunstein returns to an academic setting, his views on whether and in what instances — aside from the CFPB — behavioral economics actually had much impact on the formation of regulatory policy within the Administration.

Given the enthusiasm for application of behavioral economics to antitrust analysis from some corners of the Commission and the academy, I found this remark from Alison Oldale at the Federal Trade Commission interesting (Antitrust Source):

Behavioral economists are clearly correct in saying that people and firms are not the perfect decision makers using perfect information that they are portrayed to be in many economic models. But alternative models that incorporate better assumptions about behavior and which give us useful ways to understand the likely effects of mergers, or particular types of conduct, aren’t there yet. And in the meantime our existing models give us workable approximations. So we haven’t done much yet, but we’ll keep watching developments.

For myself, I wonder whether the first place behavioral economic analysis might be brought to bear on antitrust enforcement will be in areas like coordinated effects or exchange of information. These are areas where our existing theories are not very helpful. For example when looking at coordinated effects in merger control the standard approach focuses a lot on incentives to coordinate. But there are lots and lots of markets where firms have an incentive to coordinate but they don’t seem to be doing so. So it seems there is a big piece of the puzzle that we are missing, and perhaps behavioral economics will be able to tell us something about what to look at in order to get a better handle when coordination is likely in practice.

I certainly agree with the conclusion that the behavioral economics models are not yet ready for primetime.  See, for example, my work with Judd Stone in Misbehavioral Economics: The Case Against Behavioral Antitrust or my series of posts on “Nudging Antitrust” (here and here).

The Yale Law Journal has published my article on “The Antitrust/Consumer Protection Paradox: Two Policies At War With One Another.”  The hat tip to Robert Bork’s classic “Antitrust Paradox” in the title will be apparent to many readers.  The primary purpose of the article is to identify an emerging and serious conflict between antitrust and consumer protection law arising out of a sharp divergence in their economic approaches: antitrust law’s deep attachment to rational choice economics on the one hand, and the new behavioral economics approach of the Consumer Financial Protection Bureau on the other.  This intellectual rift brings with it serious – and detrimental – consumer welfare consequences.  After identifying the causes and consequences of that emerging rift, I explore the economic, legal, and political forces supporting the rift.

Here is the abstract:

The potential complementarities between antitrust and consumer protection law— collectively, “consumer law”—are well known. The rise of the newly established Consumer Financial Protection Bureau (CFPB) portends a deep rift in the intellectual infrastructure of consumer law that threatens the consumer-welfare oriented development of both bodies of law. This Feature describes the emerging paradox that rift has created: a body of consumer law at war with itself. The CFPB’s behavioral approach to consumer protection rejects revealed preference— the core economic link between consumer choice and economic welfare and the fundamental building block of the rational choice approach underlying antitrust law. This Feature analyzes the economic, legal, and political institutions underlying the potential rise of an incoherent consumer law and concludes that, unfortunately, there are several reasons to believe the intellectual rift shaping the development of antitrust and consumer protection will continue for some time.

Go read the whole thing.

The FTC is holding a conference on the economics of drip pricing:

Drip pricing is a pricing technique in which firms advertise only part of a product’s price and reveal other charges later as the customer goes through the buying process. The additional charges can be mandatory charges, such as hotel resort fees, or fees for optional upgrades and add-ons. Drip pricing is used by many types of firms, including internet sellers, automobile dealers, financial institutions, and rental car companies.

Economists and marketing academics will be brought together to examine the theoretical motivation for drip pricing and its impact on consumers, empirical studies, and policy issues pertaining to drip pricing. The sessions will address the following questions: Why do firms engage in drip pricing? How does drip pricing affect consumer search? Where does drip pricing occur? When is drip pricing harmful? Are there efficiency justifications for the practice in some situations? Can competition prevent firms from harming consumers through drip pricing? Can consumer experience or firm reputation limit harm from drip pricing? What types of policies could lead to improved consumer decision making and under what circumstances should such policies be applied?

The workshop, which will be free and open to the public, will be held at the FTC’s Conference Center, located at 601 New Jersey Avenue, N.W., Washington, DC. A government-issued photo ID is required for entry. Pre-registration for this workshop is not necessary, but is encouraged, so that we may better plan for the event.

Here is the conference agenda:

8:30 a.m.  Registration

9:00 a.m.  Welcome and Opening Remarks
Jon Leibowitz, Chairman, Federal Trade Commission

9:05 a.m.  Overview of Drip Pricing
Mary Sullivan, Federal Trade Commission

9:15 a.m.  Consumer and Competitive Effects of Obscure Pricing
Joseph Farrell, Director, Bureau of Economics, Federal Trade Commission

9:45 a.m.  Theories of Drip Pricing
Chair: Doug Smith, Federal Trade Commission
Presentations: David Laibson, Harvard University; Michael Baye, Indiana University; Michael Waldman, Cornell University
Discussion leader: Michael Salinger, Boston University

11:15 a.m.  Morning Break

11:30 a.m.  Keynote Address
Amelia Fletcher, Chief Economist, Office of Fair Trading, UK

12:00 p.m.  Lunch

1:00 p.m.  Empirical Analysis of Drip Pricing
Chair: Erez Yoeli, Federal Trade Commission
Presentations: Vicki Morwitz, New York University; Meghan Busse, Northwestern University; Sara Fisher Ellison, Massachusetts Institute of Technology
Discussion leader: Jonathan Zinman, Dartmouth College

2:30 p.m.  Afternoon Break

2:45 p.m.  Public Policy Roundtable
Moderator: Mary Sullivan, Federal Trade Commission
Panelists: Michael Baye, Indiana University; Sara Fisher Ellison, Massachusetts Institute of Technology; Rebecca Hamilton, University of Maryland; David Laibson, Harvard University; Vicki Morwitz, New York University; Michael Salinger, Boston University; Michael Waldman, Cornell University; Florian Zettelmeyer, Northwestern University; Jonathan Zinman, Dartmouth College

3:45 p.m.  Closing Remarks

I’ve posted to SSRN an article written for the Antitrust Law Journal symposium on the Neo-Chicago School of Antitrust.  The article is entitled “Abandoning Chicago’s Antitrust Obsession: The Case for Evidence-Based Antitrust,” and focuses upon what I believe to be a central obstacle to the continued evolution of sensible antitrust rules in the courts and agencies: the dramatic proliferation of economic theories which could be used to explain antitrust-relevant business conduct. That proliferation has given rise to a need for a commitment to develop sensible criteria for selecting among these theories; a commitment not present in modern antitrust institutions.  I refer to this as the “model selection problem,” describe how reliance upon shorthand labels and descriptions of the various “Chicago Schools” have distracted from the development of solutions to this problem, and raise a number of promising approaches to embedding a more serious commitment to empirical testing within modern antitrust.

Here is the abstract.

The antitrust community retains something of an inconsistent attitude towards evidence-based antitrust.  Commentators, judges, and scholars remain supportive of evidence-based antitrust, even vocally so; nevertheless, antitrust scholarship and policy discourse continues to press forward advocating the use of one theory over another as applied in a specific case, or one school over another with respect to the class of models that should inform the structure of antitrust’s rules and presumptions, without tethering those questions to an empirical benchmark.  This is a fundamental challenge facing modern antitrust institutions, one that I call the “model selection problem.”  The three goals of this article are to describe the model selection problem, to demonstrate that the intense focus upon so-called schools within the antitrust community has exacerbated the problem, and to offer a modest proposal to help solve the model selection problem.  This proposal has two major components: abandonment of terms like “Chicago School,” “Neo-Chicago School,” and “Post-Chicago School,” and replacement of those terms with a commitment to testing economic theories with economic knowledge and empirical data to support those theories with the best predictive power.  I call this approach “evidence-based antitrust.”  I conclude by discussing several promising approaches to embedding an appreciation for empirical testing more deeply within antitrust institutions.

I would refer interested readers to the work of my colleagues Tim Muris and Bruce Kobayashi (also prepared for the Antitrust L.J. symposium), Chicago, Post-Chicago, and Beyond: Time to Let Go of the 20th Century, which focuses upon similar themes.

The WSJ has an interesting story about the growing number of employer efforts to import “game”-like competitions into the workplace to provide incentives for employees to engage in various healthy activities.  Some of these ideas sound in the behavioral economics literature, e.g. choice architecture or otherwise harnessing the power of non-standard preferences with a variety of nudges; others are just straightforward applications of providing incentives to engage in a desired activity.

A growing number of workplace programs are borrowing techniques from digital games in an effort to encourage regular exercise and foster healthy eating habits. The idea is that competitive drive—sparked by online leader boards, peer pressure, digital rewards and real-world prizes—can get people to improve their overall health.

A survey of employers released in March by the consulting firm Towers Watson and the National Business Group on Health found that about 9% expected to use online games in their wellness programs by the end of this year, with another 7% planning to add them in 2013. By the end of next year, 60% said their health initiatives would include online games as well as other types of competitions between business locations or employee groups.

How well do these programs work in practice?  The story reports mixed evidence of the efficacy of the various game-style competitions; this is not too surprising given the complexity of individual incentives within organizations and teams.

Researchers say using videogame-style techniques to motivate people has grounding in psychological studies and behavioral economics. But, they say, the current data backing the effectiveness of workplace “gamification” wellness programs is thin, though companies including WellPoint Inc. and ShapeUp Inc. have early evidence of weight loss and other improvements in some tests.

So far, “there’s not a lot of peer-reviewed evidence that it achieves sustained improvements in health behavior and health outcomes,” says Kevin Volpp, director of the University of Pennsylvania’s Center for Health Incentives and Behavioral Economics.

Moreover, some employees may feel unwanted pressure from colleague-teammates or bosses when workplace competitions become heated, though participation is typically voluntary.

Incentives are powerful, but when and how they matter depends upon institutions.  Gneezy et al. have an excellent survey of the literature in the Journal of Economic Perspectives, where they conclude:

When explicit incentives seek to change behavior in areas like education, contributions to public goods, and forming habits, a potential conflict arises between the direct extrinsic effect of the incentives and how these incentives can crowd out intrinsic motivations in the short run and the long run. In education, such incentives seem to have moderate success when the incentives are well-specified and well-targeted (“read these books” rather than “read books”), although the jury is still out regarding the long-term success of these incentive programs. In encouraging contributions to public goods, one must be very careful when designing the incentives to prevent adverse changes in social norms, image concerns, or trust. In the emerging literature on the use of incentives for lifestyle changes, large enough incentives clearly work in the short run and even in the middle run, but in the longer run the desired change in habits can again disappear.

HT: Salop.