Archives for Ron Cass

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics).]

There has been much (admittedly important) discussion of the economic woes of mass quarantine to thwart the spread and “flatten the curve” of the virus and its health burdens — as well as some extremely interesting discussion of the long-term health woes of quarantine and the resulting economic downturn: see, e.g., previous work by Christopher Ruhm suggesting mortality rates may improve during economic downturns, and this thread on how that might play out differently in the current health crisis.

But there is perhaps insufficient attention being paid to the more immediate problem of medical resource scarcity to treat large, localized populations of acutely sick people — something that will remain a problem for some time in places like New York, no matter how successful we are at flattening the curve. 

Yet the fact that we may have failed to prepare adequately for the current emergency does not mean that we can’t improve our ability to respond to the current emergency and build up our ability to respond to subsequent emergencies — both in terms of future, localized outbreaks of COVID-19, as well as for other medical emergencies more broadly.

In what follows I lay out the outlines of a proposal for an OPTN (Organ Procurement and Transplantation Network) analogue for allocating emergency medical resources. In order to make the idea more concrete (and because no doubt there is a limit to the types of medical resources for which such a program would be useful or necessary), let’s call it the VPAN — Ventilator Procurement and Allocation Network.

As quickly as possible in order to address the current crisis — and definitely with enough speed to address the next crisis — we should develop a program to collect relevant data and enable deployment of medical resources where they are most needed, using such data, wherever possible, to enable deployment before shortages become the enormous problem they are today.

Data and information are important tools for mitigating emergencies

Hal’s post, especially in combination with Julian’s, offers a really useful suggestion for using modern information technology to help mitigate one of the biggest problems of the current crisis: The ability to return to economic activity (and a semblance of normalcy) as quickly as possible.

What I like most about his idea (and, again, Julian’s) is its incremental approach: We don’t have to wait until it’s safe for everyone to come outside in order for some people to do so. And, properly collected, assessed, and deployed, information is a key part of making that possible for more and more people every day.

Here I want to build on Hal’s idea to suggest another — perhaps even more immediately crucial — use of data to alleviate the COVID-19 crisis: The allocation of scarce medical resources.

In the current crisis, the “what” of this data is apparent: it is the testing data described by Julian in his post, and implemented in digital form by Hal in his. Thus, whereas Hal’s proposal contemplates using this data solely to allow proprietors (public transportation, restaurants, etc.) to admit entry to users, my proposal contemplates something more expansive: the provision of Hal’s test-verification vendors’ data to a centralized database in order to use it to assess current medical resource needs and to predict future needs.

The apparent ventilator availability crisis

As I have learned at great length from a friend whose spouse is an ICU doctor on the front lines, the current ventilator scarcity in New York City is worrisome (from a personal email, edited slightly for clarity):

When doctors talk about overwhelming a medical system, and talk about making life/death decisions, often they are talking about ventilators. A ventilator costs somewhere between $25K to $50K. Not cheap, but not crazy expensive. Most of the time these go unused, so hospitals have not stocked up on them, even in first-rate medical systems. Certainly not in the US, where equipment has to get used or the hospital does not get reimbursed for the purchase.

With a bad case of this virus you can put somebody — the sickest of the sickest — on one of those for three days and many of them don’t die. That frames a brutal capacity issue in a local area. And that is what has happened in Italy. They did not have enough ventilators in specific cities where the cases spiked. The mortality rates were much higher solely due to lack of these machines. Doctors had to choose who got on the machine and who did not. When you read these stories about a choice of life and death, that could be one reason for it.

Now the brutal part: This is what NYC might face soon. Faster than expected, by the way. Maybe they will ship patients to hospitals in other parts of NY state, and in NJ and CT. Maybe they can send them to the V.A. hospitals. Those are the options for how they hope to avoid this particular capacity issue. Maybe they will flatten the curve just enough with all the social distancing. Hard to know just now. But right now the doctors are pretty scared, and they are planning for the worst.

A recent PBS Report describes the current ventilator situation in the US:

A 2018 analysis from the Johns Hopkins University Center for Health Security estimated we have around 160,000 ventilators in the U.S. If the “worst-case scenario” were to come to pass in the U.S., “there might not be” enough ventilators, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, told CNN on March 15.

“If you don’t have enough ventilators, that means [obviously] that people who need it will not be able to get it,” Fauci said. He stressed that it was most important to mitigate the virus’ spread before it could overwhelm American health infrastructure.

Reports say that the American Hospital Association believes almost 1 million COVID-19 patients in the country will require a ventilator. Not every patient will require ventilation at the same time, but the numbers are still concerning. Dr. Daniel Horn, a physician at Massachusetts General Hospital in Boston, warned in a March 22 editorial in The New York Times that “There simply will not be enough of these machines, especially in major cities.”

The recent report of 9,000 COVID-19-related deaths in Italy brings the ventilator scarcity crisis into stark relief: There is little doubt that a substantial number of these deaths stem from the unavailability of key medical resources, including, most importantly, ventilators.  

Medical resource scarcity in the current crisis is a drastic problem. And without significant efforts to ameliorate it, it is likely to get worse before it gets better.

Using data to allocate scarce resources: The basic outlines of a proposed “Ventilator Procurement and Allocation Network”

But that doesn’t mean that the scarce resources we do have can’t be better allocated. As the PBS story quoted above notes, there are some 160,000 ventilators in the US. While that may not be enough in the aggregate, it’s considerably more than are currently needed in, say, New York City — and a great number of them are surely not currently being used, nor likely to be needed imminently.

The basic outline of the idea for redistributing these resources is fairly simple: 

  1. First, register all of the US’s existing ventilators in a centralized database. 
  2. Second (using a system like the one Hal describes), collect and update in real time the relevant test results, contact tracing, demographic, and other epidemiological data and input it into a database.
  3. Third, analyze this data using one or more compartmental models (or more targeted, virus-specific models) — (NB: I am the furthest thing from an epidemiologist, so I make no claims about how best to do this; the link above, e.g., is merely meant to be illustrative and not a recommendation) — to predict the demand for ventilators at various geographic levels, ranging from specific hospitals to counties or states. In much the same way, allocation of organs in the OPTN is based on a set of “allocation calculators” (which in turn are intended to implement the “Final Rule” adopted by HHS to govern transplant organ allocation decisions).   
  4. Fourth, ask facilities in low-expected-demand areas to send their unused (or excess above the level required to address “normal” demand) ventilators to those in high-expected-demand areas, with the expectation that they will be consistently reallocated across all hospitals and emergency care facilities according to the agreed-upon criteria. Of course, the allocation “algorithm” would be more complicated than this (as is the HHS Final Rule for organ allocation). But in principle this would be the primary basis for allocation. 
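
The four steps above can be sketched as a single allocation pass. What follows is a purely illustrative Python toy: the region names, parameter values, and the crude discrete SIR forecast standing in for serious epidemiological modeling are all my own invented assumptions, not part of the proposal or any real registry. It is meant only to make the pipeline (registry, data, forecast, reallocation) concrete.

```python
# Purely illustrative VPAN-style allocation pass. Region names, parameter
# values, and the crude SIR forecast below are invented assumptions.

def sir_forecast(s, i, r, beta, gamma, days):
    """Project infections forward with a basic discrete-time SIR model
    and return the peak number of simultaneous infections."""
    n = s + i + r
    peak_i = i
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak_i = max(peak_i, i)
    return peak_i

def expected_vent_demand(region, days=14, vent_rate=0.05):
    """Crude demand estimate: an assumed share of peak infections
    needing ventilation (the 5% figure is invented)."""
    peak = sir_forecast(region["S"], region["I"], region["R"],
                        region["beta"], region["gamma"], days)
    return int(peak * vent_rate)

def reallocate(regions):
    """Greedy pass: ship surplus ventilators to regions in deficit."""
    for reg in regions:
        reg["need"] = expected_vent_demand(reg) - reg["ventilators"]
    donors = [reg for reg in regions if reg["need"] < 0]
    takers = sorted((reg for reg in regions if reg["need"] > 0),
                    key=lambda reg: reg["need"], reverse=True)
    transfers = []
    for taker in takers:
        for donor in donors:
            if taker["need"] <= 0:
                break
            give = min(-donor["need"], taker["need"])
            if give > 0:
                donor["need"] += give
                taker["need"] -= give
                donor["ventilators"] -= give
                taker["ventilators"] += give
                transfers.append((donor["name"], taker["name"], give))
    return transfers

# Two hypothetical registry entries: one hot spot, one quiet region.
regions = [
    {"name": "NYC",     "S": 8_000_000, "I": 40_000, "R": 10_000,
     "beta": 0.30, "gamma": 0.10, "ventilators": 5_000},
    {"name": "Upstate", "S": 3_000_000, "I": 500, "R": 100,
     "beta": 0.20, "gamma": 0.10, "ventilators": 4_000},
]
transfers = reallocate(regions)
for src, dst, qty in transfers:
    print(f"ship {qty} ventilators: {src} -> {dst}")
```

In a real system the forecast would come from vetted epidemiological models and the transfer rule from agreed-upon criteria analogous to the HHS Final Rule; the greedy pass here just shows where those pieces would plug in.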

Not surprisingly, some guidelines for the allocation of ventilators in such emergencies already exist — like New York’s Ventilator Allocation Guidelines for triaging ventilators during an influenza pandemic. But such guidelines address the protocols for each facility to use in determining how to allocate its own scarce resources; they do not contemplate the ability to alleviate shortages in the first place by redistributing ventilators across facilities (or cities, states, etc.).

I believe that such a system — like the OPTN — could largely work on a voluntary basis. Of course, I’m quick to point out that the OPTN is a function of a massive involuntary and distortionary constraint: the illegality of organ sales. But I suspect that a crisis like the one we’re currently facing is enough to engender much the same sort of shortage (as if such a constraint were in place with respect to the use of ventilators), and thus that a similar system would be similarly useful. If not, of course, it’s possible that the government could, in emergency situations, actually commandeer privately-owned ventilators in order to effectuate the system. I leave for another day the consideration of the merits and defects of such a regime.

Of course, the system need not rely solely on voluntary participation. There could be any number of feasible means of inducing hospitals that have unused ventilators to put their surpluses into the allocation network, presumably involving some sort of cash or other compensation. Or perhaps, if and when such a system were expanded to include other medical resources, it might involve moving donor hospitals up the queue for some other scarce resources they need that don’t face a current crisis. Surely there must be equipment that a New York City hospital has in relative surplus that a small town hospital covets.

But the key point is this: It doesn’t make sense to produce and purchase enough ventilators so that every hospital in the country can simultaneously address extremely rare peak demands. Doing so would be extraordinarily — and almost always needlessly — expensive. And emergency preparedness is never about ensuring that there are no shortages in the worst-case scenario; it’s about making a minimax calculation (as odious as those are) — i.e., minimizing the maximal cost/risk, not mitigating risk entirely. (For a literature review of emergency logistics in the context of large-scale disasters, see, e.g., here.)
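
To make the minimax idea concrete, here is a toy calculation in which the strategies, scenarios, and dollar figures are all invented for illustration. Under these assumed costs, the strategy that minimizes worst-case cost is a modest stockpile, not the maximal one, which is exactly the sense in which preparedness is not about eliminating shortages in the worst case.

```python
# Toy minimax preparedness calculation. Every figure below is invented
# purely for illustration: costs[strategy][scenario] is a hypothetical
# total cost in $ billions (equipment purchases plus health and
# economic losses) of a preparedness strategy under a given scenario.
costs = {
    "no_stockpile":     {"no_outbreak": 0.0,  "regional": 8.0,  "national": 40.0},
    "modest_stockpile": {"no_outbreak": 1.0,  "regional": 3.0,  "national": 25.0},
    "max_stockpile":    {"no_outbreak": 12.0, "regional": 12.5, "national": 28.0},
}

# Minimax: pick the strategy whose worst-case cost is smallest.
best = min(costs, key=lambda s: max(costs[s].values()))
print(best)  # under these assumed costs, "modest_stockpile": its worst
             # case ($25B) beats doing nothing ($40B) and peak capacity ($28B)
```

Note the design point: with these (hypothetical) numbers, buying full peak capacity loses because its cost is incurred in every scenario, including the ones where the outbreak never comes.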

But nor does it make sense — as a policy matter — to allocate the new ventilators that will be produced in response to current demand solely on the basis of current demand. The epidemiological externalities of the current pandemic are substantial, and there is little reason to think that currently over-taxed emergency facilities — or even those preparing for their own expected demand — will make procurement decisions that reflect the optimal national (let alone global) allocation of such resources. A system like the one I outline here would effectively enable the conversion of private, constrained decisions to serve the broader demands required for optimal allocation of scarce resources in the face of epidemiological externalities.

Indeed — and importantly — such a program allows the government to supplement existing and future public and private procurement decisions to ensure an overall optimal level of supply (and, of course, government-owned ventilators — 10,000 of which already exist in the Strategic National Stockpile — would similarly be put into the registry and deployed using the same criteria). Meanwhile, it would allow private facilities to confront emergency scenarios like the current one with far more resources than it would ever make sense for any given facility to have on hand in normal times.

Some caveats

There are, as always, caveats. First, such a program relies on the continued, effective functioning of transportation networks. If any given emergency were to disrupt these — and surely some would — the program would not necessarily function as planned. Of course, some of this can be mitigated by caching emergency equipment in key locations, and, over the course of an emergency, regularly redistributing those caches to facilitate expected deployments as the relevant data comes in. But, to be sure, at the end of the day such a program depends on the ability to transport ventilators.

In addition, there will always be the risk that emergency needs swamp even the aggregate available resources simultaneously (as may yet occur during the current crisis). But at the limit there is nothing that can be done about such an eventuality: Short of having enough ventilators on hand so that every needy person in the country can use one essentially simultaneously, there will always be the possibility that some level of demand will outpace our resources. But even in such a situation — where allocation of resources is collectively guided by epidemiological (or, in the case of other emergencies, other relevant) criteria — the system will work to mitigate the likely overburdening of resources, and ensure that overall resource allocation is guided by medically relevant criteria, rather than merely the happenstance of geography, budget constraints, storage space, or the like.     

Finally, no doubt a host of existing regulations make such a program difficult or impossible. Obviously, these should be rescinded. One set of policy concerns is worth noting: privacy concerns. There is an inherent conflict between strong data privacy, in which decisions about the sharing of information belong to each individual, and the data needs to combat an epidemic, in which each person’s privately optimal level of data sharing may result in a socially sub-optimal level of shared data. To the extent that HIPAA or other privacy regulations would stand in the way of a program like this, it seems singularly important to relax them. Much of the relevant data cannot be efficiently collected on an opt-in basis (as is easily done, by contrast, for the OPTN). Certainly appropriate safeguards should be put in place (particularly with respect to the ability of government agencies/law enforcement to access the data). But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.

Appropriate standards for emergency preparedness policy generally

Importantly, such a plan would have broader applicability beyond ventilators and the current crisis. And this is a key aspect of addressing the problem: avoiding a myopic focus on the current emergency in lieu of a more clear-eyed emergency preparedness plan.

It’s important to be thinking not only about the current crisis but also about the next emergency. But it’s equally important not to let political point-scoring and a bias in favor of focusing on the seen over the unseen coopt any such efforts. A proper assessment entails the following considerations (surely among others) (and hat tip to Ron Cass for bringing to my attention most of the following insights):

  1. Arguably we are overweighting health and safety concerns with respect to COVID-19 compared to our assessments in other areas (such as ordinary flu (on which see this informative thread by Anup Malani), highway safety, heart & coronary artery diseases, etc.). That’s inevitable when one particular concern is currently so omnipresent and so disruptive. But it is important that we not let our preparations for future problems focus myopically on this cause, because the next crisis may be something entirely different. 
  2. Nor is it reasonable to expect that we would ever have been (or be in the future) fully prepared for a global pandemic. It may not be an “unknown unknown,” but it is impossible to prepare for all possible contingencies, and simply not sensible to prepare fully for such rare and difficult-to-predict events.
  3. That said, we also shouldn’t be surprised that we’re seeing more frequent global pandemics (a function of broader globalization), and there’s little reason to think that we won’t continue to do so. It makes sense to be optimally prepared for such eventualities, and if this one has shown us anything, it’s that our ability to allocate medical resources that are made suddenly scarce by a widespread emergency is insufficient. 
  4. But rather than overreact to such crises — which is difficult, given that overreaction typically aligns with the private incentives of key decision makers, the media, and many in the “chattering class” — we should take a broader, more public-focused view of our response. Moreover, political and bureaucratic incentives not only produce overreactions to visible crises, they also undermine the appropriate preparation for such crises in the future.
  5. Thus, we should create programs that identify and mobilize generically useful emergency equipment not likely to be made obsolete within a short period and likely to be needed whatever the source of the next emergency. In other words, we should continue to focus the bulk of our preparedness on things like quickly deployable ICU facilities, ventilators, and clean blood supplies — not, as we may be wrongly inclined to do given the salience of the current crisis, primarily on specially targeted drugs and test kits. Our predictive capacity for our future demand of more narrowly useful products is too poor to justify substantial investment.
  6. Given the relative likelihood of another pandemic, generic preparedness certainly includes the ability to inhibit overly fast spread of a disease that can clog critical health care facilities. This isn’t disease-specific (or, that is, while the specific rate and contours of infection are specific to each disease, relatively fast and widespread contagion is what causes any such disease to overtax our medical resources, so if we’re preparing for a future virus-related emergency, we’re necessarily preparing for a disease that spreads quickly and widely).

Because the next emergency isn’t necessarily going to be — and perhaps isn’t even likely to be — a pandemic, our preparedness should not be limited to pandemic preparedness. This means, as noted above, overcoming the political and other incentives to focus myopically on the current problem even when nominally preparing for the next one. But doing so is difficult, and requires considerable political will and leadership. It’s hard to conceive of our current federal leadership being up to the task, but it’s certainly not the case that our current problems are entirely the makings of this administration. All governments spend too much time and attention solving — and regulating — the most visible problems, whether doing so is socially optimal or not.   

Thus, in addition to (1) providing for the efficient and effective use of data to allocate emergency medical resources (e.g., as described above), and (2) ensuring that our preparedness centers primarily on generically useful emergency equipment, our overall response should also (3) recognize and correct the way current regulatory regimes also overweight visible adverse health effects and inhibit competition and adaptation by industry and those utilizing health services, and (4) make sure that the economic and health consequences of emergency and regulatory programs (such as the current quarantine) are fully justified and optimized.

A proposal like the one I outline above would, I believe, be consistent with these considerations and enable more effective medical crisis response in general.

I’ll be participating in two excellent antitrust/consumer protection events next week in DC, both of which may be of interest to our readers:

5th Annual Public Policy Conference on the Law & Economics of Privacy and Data Security

hosted by the GMU Law & Economics Center’s Program on Economics & Privacy, in partnership with the Future of Privacy Forum, and the Journal of Law, Economics & Policy.

Conference Description:

Data flows are central to an increasingly large share of the economy. A wide array of products and business models—from the sharing economy and artificial intelligence to autonomous vehicles and embedded medical devices—rely on personal data. Consequently, privacy regulation leaves a large economic footprint. As with any regulatory enterprise, the key to sound data policy is striking a balance between competing interests and norms that leaves consumers better off; finding an approach that addresses privacy concerns, but also supports the benefits of technology is an increasingly complex challenge. Not only is technology continuously advancing, but individual attitudes, expectations, and participation vary greatly. New ideas and approaches to privacy must be identified and developed at the same pace and with the same focus as the technologies they address.

This year’s symposium will include panels on Unfairness under Section 5: Unpacking “Substantial Injury”, Conceptualizing the Benefits and Costs from Data Flows, and The Law and Economics of Data Security.

I will be presenting a draft paper, co-authored with Kristian Stout, on the FTC’s reasonableness standard in data security cases following the Commission decision in LabMD, entitled, When “Reasonable” Isn’t: The FTC’s Standard-less Data Security Standard.

Conference Details:

  • Thursday, June 8, 2017
  • 8:00 am to 3:40 pm
  • at George Mason University, Founders Hall (next door to the Law School)
    • 3351 Fairfax Drive, Arlington, VA 22201

Register here

View the full agenda here


The State of Antitrust Enforcement

hosted by the Federalist Society.

Panel Description:

Antitrust policy during much of the Obama Administration was a continuation of the Bush Administration’s minimal involvement in the market. However, at the end of President Obama’s term, there was a significant pivot to investigations and blocks of high profile mergers such as Halliburton-Baker Hughes, Comcast-Time Warner Cable, Staples-Office Depot, Sysco-US Foods, and Aetna-Humana and Anthem-Cigna. How will or should the new Administration analyze proposed mergers, including certain high profile deals like Walgreens-Rite Aid, AT&T-Time Warner, Inc., and DraftKings-FanDuel?

Join us for a lively luncheon panel discussion that will cover these topics and the anticipated future of antitrust enforcement.

Speakers:

  • Albert A. Foer, Founder and Senior Fellow, American Antitrust Institute
  • Professor Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Honorable Joshua D. Wright, Professor of Law, George Mason University School of Law
  • Moderator: Honorable Ronald A. Cass, Dean Emeritus, Boston University School of Law and President, Cass & Associates, PC

Panel Details:

  • Friday, June 09, 2017
  • 12:00 pm to 2:00 pm
  • at the National Press Club, MWL Conference Rooms
    • 529 14th Street, NW, Washington, DC 20045

Register here

Hope to see everyone at both events!

The American concept of “the rule of law” (see here) is embodied in the Due Process Clause of the Fifth Amendment to the U.S. Constitution, and in the constitutional principles of separation of powers, an independent judiciary, a government under law, and equality of all before the law (see here).  It holds that the executive must comply with the law because ours is “a government of laws, and not of men,” or, as Justice Anthony Kennedy put it in a 2006 address to the American Bar Association, “that the Law is superior to, and thus binds, the government and all its officials.”  (See here.)  More specifically, and consistent with these broader formulations, the late and great legal philosopher Friedrich Hayek wrote that the rule of law “means the government in all its actions is bound by rules fixed and announced beforehand – rules which make it possible to see with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.”  (See here.)  In other words, as former Boston University Law School Dean Ron Cass put it, the rule of law involves “a system of binding rules” adopted and applied by a valid government authority that embody “clarity, predictability, and equal applicability.”  (See here.)

Regrettably, by engaging in regulatory overreach and ignoring statutory limitations on the scope of their authority, federal administrative agencies have shown scant appreciation for rule of law restraints under the current administration (see here and here for commentaries on this problem by Heritage Foundation scholars).  Although many agencies could be singled out, the Federal Communications Commission’s (FCC) actions in recent years have been especially egregious (see here).

A prime example of regulatory overreach by the FCC that flouted the rule of law was its promulgation in 2015 of an order preempting state laws in Tennessee and North Carolina that prevented municipally-owned broadband providers from providing broadband service beyond their geographic boundaries (Municipal Broadband Order, see here).  As a matter of substance, this decision ignored powerful economic evidence that municipally-provided broadband services often involve wasteful subsidies for financially troubled government-owned providers that interfere with effective private sector competition and are economically harmful (my analysis is here).  As a legal matter, the Municipal Broadband Order went beyond the FCC’s statutory authority and raised grave constitutional problems, thereby ignoring the constitutional limitations placed on the exercise of governmental powers that lie at the heart of the rule of law (see here).  The Order lacked a sound legal footing in basing its authority on Section 706 of the Telecommunications Act of 1996, which merely authorizes the FCC to promote local broadband competition and investment (a goal which the Order did not advance) and says nothing about preemption.  In addition, the FCC’s invocation of preemption authority trenched upon the power of the states to control their subordinate governmental entities, guaranteed to them by the Constitution as an essential element of their sovereignty in our federal system (see here).  What’s more, the Chattanooga, Tennessee and Wilson, North Carolina municipal broadband systems that had requested FCC preemption imposed content-based restrictions on users of their network that raised serious First Amendment issues (see here).  Specifically, those systems’ bans on the transmittal of various sorts of “abusive” language appeared to be too broad to withstand First Amendment “strict scrutiny.”  Moreover, by requiring prospective broadband enrollees to agree not to sue their provider as an initial condition of service, two of the municipal systems arguably unconstitutionally coerced users to forgo exercise of their First Amendment rights.

Fortunately, on August 10, 2016, in Tennessee v. FCC, the U.S. Court of Appeals for the Sixth Circuit struck down the Municipal Broadband Order, pithily stating:

The FCC order essentially serves to re-allocate decision-making power between the states and their municipalities. This is shown by the fact that no federal statute or FCC regulation requires the municipalities to expand or otherwise to act in contravention of the preempted state statutory provisions. This preemption by the FCC of the allocation of power between a state and its subdivisions requires at least a clear statement in the authorizing federal legislation. The FCC relies upon § 706 of the Telecommunications Act of 1996 for the authority to preempt in this case, but that statute falls far short of such a clear statement. The preemption order must accordingly be reversed.

The Sixth Circuit’s decision has important policy ramifications that extend beyond the immediate controversy, as Free State Foundation Scholars Randolph May and Seth Cooper explain:

The FCC’s Municipal Broadband Preemption Order would have turned constitutional federalism inside out by severing local political subdivisions’ accountability from the state governments that created them. Had the agency’s order been upheld, the FCC surely would have preempted several other state laws restricting municipalities’ ownership and operation of broadband networks. Several state governments would have been locked into an unwise policy of favoring municipal broadband business ventures with a track record of legal and proprietary conflicts of interest, expensive financial failures, and burdensome debts for local taxpayers.

The avoidance of a series of bad side effects in a corner of the regulatory world is not, however, sufficient grounds for breaking out the champagne.  From a global perspective, the Sixth Circuit’s Tennessee v. FCC decision, while helpful, does not address the broader problem of agency disregard for the limitations of constitutional federalism and the rule of law.  Administrative overreach, like a chronic debilitating virus, saps the initiative of the private sector (and, more generally, the body politic) and undermines its vitality.  In addition, not all federal judges can be counted on to rein in legally unjustified rules (which in any event impose costly delay and uncertainty, even if they are eventually overturned).  What is needed is an administration that emphasizes by word and deed that it is committed to constitutionalist rule of law principles – and insists that its appointees (including commissioners of independent agencies) share that philosophy.  Let us hope that we do not have to wait too long for such an administration.

Antitrust & Competition Policy Blog is hosting a symposium on Competition in Agriculture.  Mike’s post from yesterday is available here.   So far in the symposium there are also posts by Ron Cass (BU Law), Jeff Harrison (Florida Law), Peter Carstensen (Wisconsin Law), and Kyle Stiegert (Wisconsin Applied Econ).  Additional posts should be forthcoming from Christina Bohannan (Iowa Law), Andrew Novakovic (Cornell Applied Economics), and the great George Priest (Yale Law), who I hope gets the blogging bug.

Josh, Scott Kieff and I have posted a short comment based on our submission to the DOJ/USDA Workshops on Agricultural Competition, co-authored by us and Mike. The comment should be available for download from the DOJ webpage when the public comments are posted (someday . . . ).  A copy is also available here (www.laweconcenter.org), and comments are most welcome at gmanne@laweconcenter.org. Please leave comments on this post over at the A&CP Blog.

Regarding firm size and integration, it must be kept in mind that the agriculture industry in the U.S. has, for good reasons, moved beyond the historic, pastoral image of small family farms operating in quiet isolation, devoid of big business and modern technologies. The genetic traits that give modern seeds their value—traits that confer resistance to herbicide and high yields, for example—are often developed through processes that are technologically-advanced, time- and money-intensive, risky investments, and subject to various layers of regulation. It doesn’t take expertise in industrial organization to imagine why at least for some participants in this market these processes are likely to be more efficiently and effectively conducted within large agribusiness companies having enormous research and development budgets and significant expertise in managing complex business and legal operations, than they are by the somber couple depicted in the famous 1930 Grant Wood painting, “American Gothic.” Nor is such expertise required to imagine why complex contracting across firms, of any size, is likely to be of significant help in supporting the specialization and division of labor that is useful in allowing some businesses (even a small family farm is a business) to be good at planting and harvesting while others are good at inventing, investing, managing, developing, testing, manufacturing, marketing, and distributing the next wave of innovative crop technologies. This requires on the one hand that the government give reliable enforcement to contracts and property rights whether tangible or intangible (extremely important in this industry are patents, trade secrets, and even trademarks), while on the other hand it allows firms wide flexibility to decide for themselves which of these contracts and property rights they would like to enter into or obtain pursuant to the applicable bodies of contract and property law.

When courts and regulatory agencies like the DOJ Antitrust Division adopt special approaches to antitrust law to address concerns arising from these property rights and contracts, they risk crafting doctrines that inappropriately override well-established bodies of law informed by longstanding judicial and scholarly consideration of each area, and that reduce innovation and economic growth. A central countervailing concern is that the putative antitrust injuries that might arise are rooted in stylized economic models heavily dependent on a narrow set of assumptions, leaving significant room for erroneous antitrust enforcement. A modest but fundamental safeguard against this risk of "false positives" is an approach to antitrust that requires a strong demonstration of actual anticompetitive effect as a precondition for finding a monopolization violation.

Not only are patents not presumptive proof of market power in any static sense, but patents can also meaningfully improve both competition and access to patented technologies over time, in the dynamic sense. From the public record it appears that the driver of much of today’s antitrust enforcement in the agricultural industry boils down to intervention into business disputes between large and sophisticated parties. The inherent uncertainty regarding the economic consequences of specific conduct, coupled with competitors’ poor incentives and the huge costs of error, counsel strongly against antitrust intervention without strong empirical evidence that the conduct has reduced competition and harmed consumers in the form of higher prices, lower quality, or reduced innovation.

Antitrust & Competition Policy Blog is hosting a symposium on Competition in Agriculture. So far today, there are posts by Ron Cass (BU Law), Jeff Harrison (U of Florida Law), and me.  Additional posts should be forthcoming from Christina Bohannan (U. Iowa Law), Scott Kieff (GW Law), Andrew Novakovic (Cornell Applied Economics), George Priest (Yale Law), Kyle Stiegert (U. Wisconsin Agricultural and Applied Economics), and Josh Wright (George Mason Law). My contribution is reproduced below.  Please leave comments over at the A&CP Blog.

Learn from history, don’t repeat it.

Antitrust laws originated in Midwest states like Missouri in the late 1880s when small farmers banded together in the face of falling agricultural commodity prices to stand against the competitive pressures of larger, more efficient farming operations. Over a century later, it is, as Yogi Berra said, “déjà vu all over again.”

Of the almost 2.2 million farms in the USDA's 2008 Agricultural Resource Management Survey, the 1.8 million smallest farms lost money on their farming operations (on average) even after accounting for government program payments. These farms represented only 10% of the value of agricultural production in the US, yet received roughly 28% of government payments.

In addition, these small-scale farmers are less likely than their larger competitors to shop beyond the nearest town for key inputs, to shop for the best price from suppliers, to negotiate price discounts, or to lock in prices for inputs. Small-scale farmers are also much less likely to market their products using contracts or to use market-based risk management tools. In short, small-scale farmers fail to (or are simply unable to) take advantage of market opportunities that larger, more efficient farms exploit. That large farms do engage in these activities suggests a very competitive agricultural economy.

Although antitrust has long been used as an anticompetitive club by economically inefficient competitors, such applications do more harm than good. The agriculture sector would be better served by eliminating the subsidies that sustain marginal producers than by using antitrust to penalize more efficient, better managed farming operations and other firms along the rest of the food value chain. DOJ’s antitrust inquiry will, at best, simply perpetuate the inefficient industry fringe or, more likely, inhibit the kinds of technological and market innovations that have provided US consumers and the world with a safe, reliable food supply.

As you may know, this past Friday we (Geoff and Josh) organized the inaugural GMU/Microsoft Conference on the Law and Economics of Innovation. Overall, we were extremely pleased with our first entry in this conference series, The Regulation of Innovation and Economic Growth. We had about 130 register for the conference, including many high level FTC and DOJ officials, academics, and industry representatives. In the end we had about 95 attendees. We also hosted a dinner for about 45 Washington VIPs (several FTC folks, a federal judge, prominent attorneys, representatives from USTR and Commerce, etc.) the evening before at Citronelle. A good time and good conversation were had by all.

The conference started off on the right foot with an opening address from Bob Cooter (Berkeley Law) which pointed to institutional and legal solutions to the “double trust problem” in innovation as a primary factor in unleashing entrepreneurial forces in countries facing high levels of poverty and stagnant growth. The basic point was that various institutions—including importantly IP laws—serve to facilitate the essential melding of ideas and capital necessary to promote innovation and to encourage economic growth. The talk was derived from a book Cooter is currently writing (with Hans-Bernd Schaefer), two draft chapters of which are available here.

The three subsequent panels discussed the innovative process and bundling in technology markets, IP reform, and antitrust regulation of innovation. The papers are available here. We both took notes on the presentations and share some reflections on the papers and discussions below the fold.
