
A debate has broken out among the four sitting members of the Federal Trade Commission (FTC) in connection with the recently submitted FTC Report to Congress on Privacy and Security. Chair Lina Khan argues that the commission “must explore using its rulemaking tools to codify baseline protections,” while Commissioner Rebecca Kelly Slaughter has urged the FTC to initiate a broad-based rulemaking proceeding on data privacy and security. By contrast, Commissioners Noah Joshua Phillips and Christine Wilson counsel against a broad-based regulatory initiative on privacy.

Decisions to initiate a rulemaking should be viewed through a cost-benefit lens (see summaries of Thom Lambert’s masterful treatment of regulation, of which rulemaking is a subset, here and here). Unless there is a market failure, rulemaking is not called for. Even in the face of market failure, regulation should not be adopted unless it is more cost-beneficial than reliance on markets (including the ability of public and private litigation to address market-failure problems, such as data theft). For a variety of reasons, it is unlikely that FTC rulemaking directed at privacy and data security would pass a cost-benefit test.

Discussion

As I have previously explained (see here and here), FTC rulemaking pursuant to Section 6(g) of the FTC Act (which authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter”) is properly read as authorizing merely procedural, not substantive, rules. As such, efforts to enact substantive competition rules would not pass a cost-benefit test. Such rules could well be struck down as beyond the FTC’s authority on constitutional law grounds, and as “arbitrary and capricious” on administrative law grounds. What’s more, they would represent retrograde policy. Competition rules would generate higher error costs than adjudications; could be deemed to undermine the rule of law, because the U.S. Justice Department (DOJ) could not apply such rules; and would chill innovative, efficiency-seeking business arrangements.

Accordingly, the FTC likely would not pursue 6(g) rulemaking should it decide to address data security and privacy, a topic that best fits under the “consumer protection” category. Rather, the FTC most likely would initiate a “Magnuson-Moss” rulemaking (MMR) under Section 18 of the FTC Act, which authorizes the commission to prescribe “rules which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce within the meaning of Section 5(a)(1) of the Act.” Among other things, Section 18 requires that the commission’s rulemaking proceedings provide an opportunity for informal hearings at which interested parties are accorded limited rights of cross-examination. Also, before commencing an MMR proceeding, the FTC must have reason to believe the practices addressed by the rulemaking are “prevalent.” 15 U.S.C. Sec. 57a(b)(3).

MMR proceedings, which are not governed under the Administrative Procedure Act (APA), do not present the same degree of legal problems as Section 6(g) rulemakings (see here). The question of legal authority to adopt a substantive rule is not raised; “rule of law” problems are far less serious (the DOJ is not a parallel enforcer of consumer-protection law); and APA issues of “arbitrariness” and “capriciousness” are not directly presented. Indeed, MMR proceedings include a variety of procedures aimed at promoting fairness (see here, for example). An MMR proceeding directed at data privacy predictably would be based on the claim that the failure to adhere to certain data-protection norms is an “unfair act or practice.”

Nevertheless, MMR rules would be subject to two substantial sources of legal risk.

The first of these arises out of federalism. Three states (California, Colorado, and Virginia) recently have enacted comprehensive data-privacy laws, and a large number of other state legislatures are considering data-privacy bills (see here). The proliferation of state data-privacy statutes would raise the risk of inconsistent and duplicative regulatory norms, potentially chilling business innovations addressed at data protection (a severe problem in the Internet Age, when business data-protection programs typically will have interstate effects).

An FTC MMR data-protection regulation that successfully “occupied the field” and preempted such state provisions could eliminate that source of costs. The Magnuson–Moss Warranty Act, however, does not contain an explicit preemption clause, leaving in serious doubt the ability of an FTC rule to displace state regulations (see here for a summary of the murky state of preemption law, including the skepticism of textualist Supreme Court justices toward implied “obstacle preemption”). In particular, the long history of state consumer-protection and antitrust laws that coexist with federal laws suggests that the case for FTC rule-based displacement of state data protection is a weak one. The upshot of enacting a Section 18 FTC data-protection rule, then, could be “the worst of all possible worlds,” with drawn-out litigation leading to competing federal and state norms that multiply business costs.

The second source of risk arises out of the statutory definition of “unfair practices,” found in Section 5(n) of the FTC Act. Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority . . . to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In effect, Section 5(n) implicitly subjects unfair practices to a well-defined cost-benefit framework. Thus, in promulgating a data-privacy MMR, the FTC first would have to demonstrate that specific disfavored data-protection practices caused or were likely to cause substantial harm. What’s more, the commission would have to show that any actual or likely harm would not be outweighed by countervailing benefits to consumers or competition. One would expect that a data-privacy rulemaking record would include submissions that pointed to the efficiencies of existing data-protection policies that would be displaced by a rule.
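Purely as an illustration (not statutory language — the function and parameter names below are my own shorthand), the three-prong Section 5(n) unfairness test can be sketched as a boolean predicate:

```python
def is_unfair(substantial_injury: bool,
              reasonably_avoidable: bool,
              outweighed_by_benefits: bool) -> bool:
    """Schematic sketch of the Section 5(n) unfairness test.

    An act or practice may be declared unfair only if it:
    (1) causes or is likely to cause substantial consumer injury,
    (2) that injury is not reasonably avoidable by consumers, and
    (3) the injury is not outweighed by countervailing benefits
        to consumers or to competition.
    Public-policy considerations may inform, but not serve as the
    primary basis for, the determination (not modeled here).
    """
    return (substantial_injury
            and not reasonably_avoidable
            and not outweighed_by_benefits)

# Substantial, unavoidable harm with no offsetting benefits: unfair.
print(is_unfair(True, False, False))   # True
# Harm outweighed by countervailing benefits: not unfair.
print(is_unfair(True, False, True))    # False
```

The conjunctive structure is the point: failure on any one prong defeats an unfairness finding, which is why a rulemaking record must support all three.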

Moreover, subsequent federal court challenges to a final FTC rule likely would put forth the consumer and competitive benefits sacrificed by rule requirements. For example, rule challengers might point to the added business costs passed on to consumers that would arise from particular rule mandates, and the diminution in competition among data-protection systems generated by specific rule provisions. Litigation uncertainties surrounding these issues could be substantial and would cast into further doubt the legal viability of any final FTC data protection rule.

Apart from these legal risk-based costs, an MMR data-privacy rule predictably would generate error-based costs. Given imperfect information in the hands of government and the impossibility of achieving welfare-maximizing nirvana through regulation (see, for example, here), any MMR data-privacy rule would erroneously condemn some economically efficient business protocols and disincentivize some efficiency-seeking behavior. The Section 5(n) cost-benefit framework, though helpful, would not eliminate such error. (For example, even bureaucratic efforts to accommodate some business suggestions during the rulemaking process might tilt the post-rule market in favor of certain business models, thereby distorting competition.) In the abstract, it is difficult to say whether the welfare benefits of a final MMR data-privacy rule (measured by reductions in data-privacy-related consumer harm) would outweigh the costs, even before taking legal costs into account.

Conclusion

At least two FTC commissioners (and likely a third, assuming that President Joe Biden’s highly credentialed nominee Alvaro Bedoya will be confirmed by the U.S. Senate) appear to support FTC data-privacy regulation, even in the absence of new federal legislation. Such regulation, which presumably would be adopted as an MMR pursuant to Section 18 of the FTC Act, would probably not prove cost-beneficial. Not only would adoption of a final data-privacy rule generate substantial litigation costs and uncertainty, it would quite possibly add an additional layer of regulatory burdens above and beyond the requirements of proliferating state privacy rules. Furthermore, it is impossible to say whether the consumer-privacy benefits stemming from such an FTC rule would outweigh the error costs (manifested through competitive distortions and consumer harm) stemming from the inevitable imperfections of the rule’s requirements. All told, these considerations counsel against the allocation of scarce FTC resources to a Section 18 data-privacy rulemaking initiative.

But what about legislation? New federal privacy legislation that explicitly preempted state law would eliminate costs arising from inconsistencies among state privacy rules. Ideally, if such legislation were to be pursued, it should, to the extent possible, embody a cost-benefit framework designed to minimize the sum of administrative (including litigation) and error costs. The nature of such a possible law, and the role the FTC might play in administering it, however, is a topic for another day.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ramaz Samrout, (Principal, REIM Strategies; Lay Member, Competition Tribunal of Canada)]

At a time when nations are engaged in bidding wars in the worldwide market to alleviate shortages of critical medical necessities during the Covid-19 crisis, it certainly raises the question: have free trade and competition policies, which have produced efficient, globally integrated market networks, gone too far? Did economists and policymakers advocating for efficient competitive markets not foresee a failure of the supply chain in meeting a surge in demand during an inevitable global crisis such as this one?

The failures in securing medical supplies have escalated a global health crisis into geopolitical spats fuelled by strong nationalistic public sentiments. In the process of competing to acquire highly treasured medical equipment, governments are confiscating, outbidding, and diverting shipments at the risk of not adhering to the terms of established free trade agreements and international trading rules, all at the cost of the humanitarian needs of other nations.

Since the start of the Covid-19 crisis, all levels of government in Canada have been working on diversifying the supply chain for critical equipment, both domestically and internationally. But, most importantly, these governments are bolstering domestic production and an integrated domestic supply network, recognizing the increasing likelihood that tightening borders will impede the movement of critical products.

For the past three weeks in his daily briefings, Canada’s prime minister, Justin Trudeau, has repeatedly confirmed the government’s support of domestic enterprises that are switching their manufacturing lines to produce critical medical supplies and other “made in Canada” products.

As conditions worsen in the US and the White House hardens its position against collaboration and sharing for the greater global humanitarian good—even in the presence of a recent bilateral agreement to keep the movement of essential goods fluid—Canada’s response has become more retaliatory, shifting to a message emphasizing that the need for “made in Canada” products is one of extreme urgency.

On April 3rd, President Trump ordered Minnesota-based 3M to stop exporting medical-grade masks to Canada and Latin America, a decision enabled by his triggering of the 1950 Defense Production Act. In response, Ontario Premier Doug Ford stated in his public address:

Never again in the history of Canada should we ever be beholden to companies around the world for the safety and wellbeing of the people of Canada. There is nothing we can’t build right here in Ontario. As we get these companies round up and we get through this, we can’t be going over to other sources because we’re going to save a nickel.

Premier Ford’s words ring true for many Canadians as they watch this crisis unfold and wonder where it would stop if the crisis worsens. Will our neighbour to the south block shipments of a Covid-19 vaccine when one is developed? Will it extend to other essential goods, such as food or medicine?

There are reports that the decline in the number of foreign farm workers caused by travel restrictions and quarantine rules in both Canada and the US will cause food-production shortages, which makes the actions of the White House very unsettling for Canadians. Canada’s exports to the US constitute 75% of total Canadian exports, while imports from the US constitute 46%. Canada’s imports of food and beverages from the US were valued at US $24 billion in 2018, including prepared foods, fresh vegetables, fresh fruits, other snack foods, and non-alcoholic beverages.

The length and depth of the crisis will determine to what extent the US and Canadian markets will experience shortages in products. For Canada, the severity of the pandemic in the US could result in further restrictions on the border. And it is becoming progressively more likely that it will also result in a significant reduction in the volume of necessities crossing the border between the two nations.

Increasingly, the depth and pain of shortages in necessities will shape public sentiment towards free trade and strengthen mainstream demands for more nationalistic and protectionist policies. This will put more pressure on political and government establishments to take action.

The reliance on free trade and competition policies favouring highly integrated supply-chain networks is showing cracks in meeting national interests in this time of crisis. This goes well beyond the usual economic points of contention between countries: domestic employment, job loss, and resource allocation. The needed correction, however, risks moving the pendulum too far toward protectionism.

Free trade setbacks and global integration disruptions would become the new economic reality as countries ensure that domestic self-sufficiency comes first. A new trade trend has been set in motion, and there is no going back from some level of disintegration of globalised supply-chain production.

How would domestic self-sufficiency be achieved? 

Would international conglomerates build local plants and forgo their profit maximizing strategies of producing in growing economies that offer cheap wages and resources in order to avoid increased protectionism?

Will the Canada-United States-Mexico Agreement (CUSMA), known as the “new NAFTA,” which has yet to enter into force, be renegotiated to allow for measures securing domestic production of necessities in the form of higher tariffs, trade quotas, and state subsidies?

Are advanced capitalist economies willing to create state-owned industries to produce what they deem domestic necessities?

Many other trade-policy variations and options focused on protectionism are possible, which could lead to the creation of domestic monopolies. Furthermore, any return to protected national production networks will reduce consumer welfare and eventually impede the technological advancements that result from competition.

Divergence between free trade agreements and competition policy in a new era of protectionism

For the past 30 years, national competition laws and policies have increasingly become an integrated part of free trade agreements, albeit in the form of soft competition law language, making references to the parties’ respective competition laws, and the need for transparency, procedural fairness in enforcement, and cooperation.

Similarly, free trade objectives and frameworks have become part of the design and implementation of competition legislation and, subsequently, case law. Both are intended to encourage competitive market systems and efficiency, an implied by-product of open markets.

In that regard, Canada’s competition legal framework, the Competition Act, seeks to maintain and strengthen competitive market forces by encouraging maximum efficiency in the use of economic resources. Provisions for determining the level of competitiveness in a market consider barriers to entry, among them tariff and non-tariff barriers to international trade. These provisions further direct adjudicators to examine free trade agreements currently in force and their role in facilitating the current or future entry of an international competitor into the market to preserve or increase competition. The Act goes further still, directing assessment of the extent of any increase in the real value of exports, or of the substitution of domestic products for imported products.

It is evident in the design of free trade agreements and competition legislation that efficiency, competition in price, and diversification of products is to be achieved by access to imported goods and by encouraging the creation of global competitive suppliers.

Therefore, the re-emergence of protectionist nationalistic measures in international trade will result in a divergence between competition laws and free trade agreements. Such setbacks would leave competition enforcers, administrators, and adjudicators grappling with the conflict between the economic principles set out in competition law and the policy objectives that could be stipulated in future trade agreements. 

The challenge ahead facing governments and industries is how to correct for the cracks in the current globalized competitive supply networks that have been revealed during this crisis without falling into a trap of nationalism and protectionism.

The cause of basing regulation on sound empirical science (rather than mere negative publicity) – and of preventing regulatory interference with First Amendment commercial speech rights – got a judicial boost on February 26.

Specifically, in National Association of Wheat Growers et al. v. Zeise (Monsanto Case), a California federal district court judge preliminarily enjoined application against Monsanto of a labeling requirement imposed by a California regulatory law, Proposition 65.  Proposition 65 mandates that the Governor of California publish a list of chemicals known to the State to cause cancer, and also prohibits any person in the course of doing business from knowingly and intentionally exposing anyone to the listed chemicals without a prior “clear and reasonable” warning.  In this case, California sought to make Monsanto place warning labels on its popular Roundup weed killer products, stating that glyphosate, a widely-used herbicide and key Roundup ingredient, was known to cause cancer.  Monsanto, joined by various agribusiness entities, sued to enjoin California from taking that action.  Judge William Shubb concluded that there was insufficient evidence that the active ingredient in Roundup causes cancer, and that requiring Roundup to publish warning labels would violate Monsanto’s First Amendment rights by compelling it to engage in false and misleading speech.  Salient excerpts from Judge Shubb’s opinion are set forth below:

[When, as here, it compels commercial speech, in order to satisfy the First Amendment,] [t]he State has the burden of demonstrating that a disclosure requirement is purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest. . . .  The dispute in the present case is over whether the compelled disclosure is of purely factual and uncontroversial information. In this context, “uncontroversial” “refers to the factual accuracy of the compelled disclosure, not to its subjective impact on the audience.” [citation omitted]

 On the evidence before the court, the required warning for glyphosate does not appear to be factually accurate and uncontroversial because it conveys the message that glyphosate’s carcinogenicity is an undisputed fact, when almost all other regulators have concluded that there is insufficient evidence that glyphosate causes cancer. . . .

It is inherently misleading for a warning to state that a chemical is known to the state of California to cause cancer based on the finding of one organization [, the International Agency for Research on Cancer] (which as noted above, only found that substance is probably carcinogenic), when apparently all other regulatory and governmental bodies have found the opposite, including the EPA, which is one of the bodies California law expressly relies on in determining whether a chemical causes cancer. . . .  [H]ere, given the heavy weight of evidence in the record that glyphosate is not in fact known to cause cancer, the required warning is factually inaccurate and controversial. . . .

The court’s First Amendment inquiry here boils down to what the state of California can compel businesses to say. Whether Proposition 65’s statutory and regulatory scheme is good policy is not at issue. However, where California seeks to compel businesses to provide cancer warnings, the warnings must be factually accurate and not misleading. As applied to glyphosate, the required warnings are false and misleading. . . .

As plaintiffs have shown that they are likely to succeed on the merits of their First Amendment claim, are likely to suffer irreparable harm absent an injunction, and that the balance of equities and public interest favor an injunction, the court will grant plaintiffs’ request to enjoin Proposition 65’s warning requirement for glyphosate.

The Monsanto Case commendably highlights a little-appreciated threat of government overregulatory zeal. Not only may excessive regulation fail a cost-benefit test and undermine private property rights, it may violate the First Amendment speech rights of private actors when it compels inaccurate speech. The negative economic consequences may be substantial when the government-mandated speech involves a claim about a technical topic that not only lacks empirical support (and thus may be characterized as “junk science”), but is deceptive and misleading (if not demonstrably false). Deceptive and misleading speech in the commercial marketplace reduces marketplace efficiency and social welfare (both consumer surplus and producer surplus). In particular, it does this by deterring mutually beneficial transactions (for example, purchases of Roundup that would occur absent misleading labeling about cancer risks), generating suboptimal transactions (for example, purchases of inferior substitutes for Roundup due to misleading Roundup labeling), and distorting competition within the marketplace (the reallocation of market shares among Roundup and substitutes not subject to labeling). The short-term static effects of such market distortions may be dwarfed by the dynamic effects, such as firms’ disincentives to invest in innovation in (or even participate in) markets subject to inaccurate information concerning the firms’ products or services.

In short, the Monsanto Case highlights the fact that government regulation not only imposes an implicit tax on business – it affirmatively distorts the workings of individual markets if it causes the introduction of misleading or deceptive information that is material to marketplace decision-making. The threat of such distortive regulation may be substantial, especially in areas where regulators interact with “public interest clients” that have an incentive to demonize disfavored activities by private commercial actors – one example being the health and safety regulation of agricultural chemicals. In those areas, there may be a case for federal preemption of state regulation, and for particularly close supervision of federal agencies to avoid economically inappropriate commercial speech mandates. Stay tuned for future discussion of such potential legal reforms.

In a recent article for the San Francisco Daily Journal I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of jurisdiction of Canadian courts to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The Internet isn’t that exceptional.

Read the whole thing!

The FCC doesn’t have authority over the edge and doesn’t want authority over the edge. Well, that is until it finds itself with no choice but to regulate the edge as a result of its own policies. As the FCC begins to explore its new authority to regulate privacy under the Open Internet Order (“OIO”), for instance, it will run up against policy conflicts and inconsistencies that will make it increasingly hard to justify forbearance from regulating edge providers.

Take, for example, the recently announced NPRM titled “Expanding Consumers’ Video Navigation Choices” — a proposal that seeks to force cable companies to provide video programming to third-party set-top box manufacturers. Under the proposed rules, MVPDs would be required to expose three data streams to competitors: (1) listing information about what is available to particular customers; (2) the rights associated with accessing such content; and (3) the actual video content. As Geoff Manne has aptly noted, this seems to be much more of an effort to eliminate the “nightmare” of “too many remote controls” than it is to actually expand consumer choice in a market that is essentially drowning in consumer choice. But of course even so innocuous a goal—which is probably more about picking on cable companies because… “eww cable companies”—suggests some very important questions.

First, the market for video on cable systems is governed by a highly interdependent web of contracts that assures to a wide variety of parties that their bargained-for rights are respected. Among other things, channels negotiate for particular placements and channel numbers in a cable system’s lineup, IP rights holders bargain for content to be made available only at certain times and at certain locations, and advertisers pay for their ads to be inserted into channel streams and broadcasts.

Moreover, to a large extent, the content industry develops its content based on a stable regime of bargained-for contractual terms with cable distribution networks (among others). Disrupting the ability of cable companies to control access to their video streams will undoubtedly alter the underlying assumptions upon which IP companies rely when planning and investing in content development. And, of course, the physical networks and their related equipment have been engineered around the current cable-access regimes. Some non-trivial amount of re-engineering will have to take place to make the cable-networks compatible with a more “open” set-top box market.

The FCC nods to these concerns in its NPRM, when it notes that its “goal is to preserve the contractual arrangements between programmers and MVPDs, while creating additional opportunities for programmers[.]” But this aspiration is not clearly given effect in the NPRM, and, as noted, some contractual arrangements are simply inconsistent with the NPRM’s approach.

Second, the FCC proposes to bind third-party manufacturers to the public interest privacy commitments in §§ 629, 551 and 338(i) of the Communications Act (“Act”) through a self-certification process. MVPDs would be required to pass the three data streams to third-party providers only once such a certification is received. To the extent that these sections, enforced via self-certification, do not sufficiently curtail third-parties’ undesirable behavior, the FCC appears to believe that “the strictest state regulatory regime[s]” and the “European Union privacy regulations” will serve as the necessary regulatory gap fillers.

This seems hard to believe, however, particularly given the recently announced privacy and cybersecurity NPRM, through which the FCC will adopt rules detailing the agency’s new authority (under the OIO) to regulate privacy at the ISP level. Largely, these rules will grow out of §§ 222 and 201 of the Act, which the FCC in Terracom interpreted together to be a general grant of privacy and cybersecurity authority.

I’m apprehensive of the asserted scope of the FCC’s power over privacy — let alone cybersecurity — under §§ 222 and 201. In truth, the FCC makes an admirable showing in Terracom of demonstrating its reasoning; it does a far better job than the FTC in similar enforcement actions. But there remains a problem. The FTC’s authority is fundamentally cabined by the limitations contained within the FTC Act (even if it frequently chooses to ignore them, they are there and are theoretically a protection against overreach).

But the FCC’s enforcement decisions are restrained (if at all) by a vague “public interest” mandate, and a claim that it will enforce these privacy principles on a case-by-case basis. Thus, the FCC’s proposed regime is inherently one based on vast agency discretion. As in many other contexts, enforcers with wide discretion and a tremendous power to penalize exert a chilling effect on innovation and openness, as well as a frightening power over a tremendous swath of the economy. For the FCC to claim anything like an unbounded UDAP authority for itself has got to be outside of the archaic grant of authority from § 201, and is certainly a long stretch for the language of § 706 (a provision of the Act which it used as one of the fundamental justifications for the OIO)— leading very possibly to a bout of Chevron problems under precedent such as King v. Burwell and UARG v. EPA.

And there is a real risk here of, if not hypocrisy, then… deep conflict in the way the FCC will strike out on the set-top box and privacy NPRMs. The Commission has already noted in its NPRM that it will not be able to bind third-party providers of set-top boxes under the same privacy requirements that apply to current MVPD providers. Self-certification will go a certain length, but even there agitation from privacy absolutists will possibly sway the FCC to consider more stringent requirements. For instance, §§ 551 and 338 of the Act — which the FCC focuses on in the set-top box NPRM — are really only about disclosing intended uses of consumer data. And disclosures can come in many forms, including burying them in long terms of service that customers frequently do not read. Such “weak” guarantees of consumer privacy will likely become a frequent source of complaint (and FCC filings) for privacy absolutists.  

Further, many of the new set-top box entrants are going to be current providers of OTT video, or of devices that redistribute OTT video. And many of these providers earn a huge share of their revenue from data mining and selling access to customer data. Which means one of two things: Either the FCC is simply going to allow us to live in a world of double standards, where these self-certifying entities are permitted significantly more leeway in their uses of consumer data than MVPD providers, or, alternatively, the FCC is going to discover that it does in fact need to “do something.” If only there were a creative way to extend the new privacy authority under Title II to these providers of set-top boxes… Oh! There is: bring edge providers into the regulatory fold under the OIO.

It’s interesting that Wheeler’s announcement of the FCC’s privacy NPRM explicitly noted that the rules would not be extended to edge providers. That Wheeler felt the need to be explicit on this point suggests he believes the FCC has the authority to extend the privacy regulations to edge providers, but will merely forbear (for now) from doing so.

If edge providers are swept into the scope of Title II they would be subject to the brand new privacy rules the FCC is proposing. Thus, despite itself (or perhaps not), the FCC may find itself in possession of a much larger authority over some edge providers than any of the pro-Title II folks would have dared admit was possible. And the hook (this time) could be the privacy concerns embedded in the FCC’s ill-advised attempt to “open” the set-top box market.

This is a complicated set of issues, and it’s contingent on a number of moving parts. This week, Chairman Wheeler will be facing an appropriations hearing where I hope he will be asked to unpack his thinking regarding the true extent to which the OIO may in fact be extended to the edge.

I’m very pleased to announce that the George Mason Law & Economics Center is hosting a program focusing on our friend and colleague Larry Ribstein’s scholarship on the market for law. Henry Butler and Bruce Kobayashi have put together a really wonderful program of folks coming together not simply to celebrate Larry’s work, but to use it as a platform for further discussion and for legal scholars to engage with these important issues.

Interested readers might want to check out the TOTM Unlocking the Law Symposium.

The announcement follows and I hope to see some of you there on Friday, November 9, 2012 at GMU Law.
The Henry G. Manne Program in Law and Regulatory Studies presents Unlocking the Law: Building on the Work of Professor Larry Ribstein to be held at George Mason University School of Law, Friday, November 9th, 2012. The conference will run from 8:00 A.M. to 4:00 P.M.

OVERVIEW: In a series of influential and provocative articles, Professor Larry Ribstein examined the forces behind the recent upheaval in the market for legal services. These forces included increased global competition, changes in the demand for legal services resulting from the expanded role of the in-house counsel, and the expanded use of technology. His analysis showed that changes in the market for legal services were not just the result of a cyclical downturn in the economy. Rather, the profound changes in the market reflected building competitive pressures that exposed the flaws in the business model used by large firms to provide legal services. His recent writings also examined the broader implications of this upheaval for legal education, the private production of law, and whether legal innovation will be hindered by or hasten the demise of the current system of professional regulation of lawyers.

Professor Ribstein passed away suddenly on December 24, 2011. In the wake of the terrible loss of their close friend and colleague, Professors Henry Butler and Bruce Kobayashi (along with several other colleagues at Mason Law) have decided to honor Larry through a conference designed to capture and expand on the spirit of Larry’s recent work. The Unlocking the Law Conference seeks to advance these goals by inviting legal scholars to present their views and engage in a vibrant discussion about the present and future of the market for legal services. The panels at this conference will showcase 14 papers written specifically for this occasion and presented to the public for the first time.

This conference is organized by Henry N. Butler, Executive Director of the Law & Economics Center and George Mason Foundation Professor of Law, and Bruce H. Kobayashi, Professor of Law, George Mason University School of Law through a new Project on Legal Services Reform – under the auspices of the Mason Law & Economics Center. The Project on Legal Services Reform seeks to continue and extend the important work on legal innovation, legal education, law firms, and legal regulation produced by Larry. We hope to encourage scholars who have not worked in these areas to read Larry’s work, critique it in the same manner in which Larry famously commented on papers, and expand (or even restrict or redirect) the thrust of Larry’s work. In essence, this project is about “Larry as Catalyst.”

For background information, you might want to visit TRUTH ON THE MARKET (http://www.truthonthemarket.com), which held an online symposium on this topic on September 19 and 20, 2011.

REGISTRATION: You must pre-register for this event. To register, please send a message with your name, affiliation, and full contact information to: Jeff Smith, Coordinator, Henry G. Manne Program in Law and Regulatory Studies, jsmithQ@gmu.edu

AGENDA:

Friday, November 9, 2012:

Panel I. The Future of Legal Services and Legal Education

How the Structure of Universities Determined the Fate of American Law Schools
– Henry G. Manne, Distinguished Visiting Professor, Ave Maria School of Law; Dean Emeritus, George Mason University School of Law

The Undergraduate Option for Legal Education
– John O. McGinnis, George C. Dix Professor in Constitutional Law, Northwestern University School of Law

Panel II. Deregulating Legal Services

The Deprofessionalization of Professional Services: What Law and Medicine Have in Common and How They Differ
– Richard A. Epstein, Laurence A. Tisch Professor of Law, New York University School of Law

The Future of Licensing Lawyers
– M. Todd Henderson, Professor of Law, University of Chicago Law School

Failing the Legal System: Why Lawyers and Judges Need to Act to Authorize the Organizational Practice of Law
– Gillian K. Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, University of Southern California Gould School of Law

Globalization and Deregulation of Legal Services
– Nuno Garoupa, Professor and H. Ross and Helen Workman Research Scholar, University of Illinois College of Law; Co-Director, Illinois Program on Law, Behavior, and Social Science

Panel III. Law Firms and Competition Between Lawyers

From Big Law to Lean Law
– William D. Henderson, Professor of Law and Van Nolan Faculty Fellow, Indiana University Maurer School of Law; Director, Center on the Global Legal Profession

Glass Half Full: The Significant Upsides to the Changes in the American Legal Market
– Benjamin H. Barton, Professor of Law, University of Tennessee College of Law

An Exploration of Price Competition Among Lawyers
– Clifford Winston, Senior Fellow, Economic Studies, Brookings Institution

Panel IV. Reputation, Fiduciary Duties, and Agency Costs

Lawyers as Reputational Intermediaries: Sovereign Bond Issuances (1820-2012)
– Michael H. Bradley, F.M. Kirby Professor of Investment Banking Emeritus, Fuqua School of Business, Duke University; Professor of Law, Duke University School of Law
– Mitu Gulati, Professor of Law, Duke University School of Law
– Irving A. De Lira Salvatierra, Graduate Student, Department of Economics, Duke University

The Fiduciary Society
– Jason Scott Johnston, Henry L. and Grace Doherty Charitable Foundation Professor of Law and Nicholas E. Chimicles Research Professor in Business Law and Regulation, University of Virginia School of Law

Class Action Lawmakers and the Agency Problem
– Barry E. Adler, Bernard Petrie Professor of Law and Business and Associate Dean for Information Systems and Technology, New York University School of Law

Panel V. Private Lawmaking and Adjudication

Decentralizing the Lawmaking Function: Should There Be Intellectual Property Rights in Law?
– Robert G. Bone, G. Rollie White Teaching Excellence Chair in Law, University of Texas at Austin School of Law

Arbitration, the Law Market, and the Law of Lawyering
– Erin O’Hara O’Connor, Milton R. Underwood Chair in Law, Vanderbilt University Law School
– Peter B. Rutledge, Herman E. Talmadge Chair of Law, University of Georgia Law School

VENUE:
George Mason University School of Law
3301 Fairfax Drive
Arlington, VA 22201

FURTHER INFORMATION: For more information regarding this conference or other initiatives of the Law & Economics Center, please visit: http://www.MasonLEC.org

Call or send an email to: Tel: (703) 993-8040, Email: lec@gmu.edu

The Henry G. Manne Program in Law & Economics honors the legacy of Henry G. Manne, Dean Emeritus of George Mason Law School and founder of the Law & Economics Center. Manne was a trailblazer in the development of law and economics, not only as a prominent and influential scholar, but also as an academic entrepreneur. He spurred the development of law and economics into the most influential area of legal scholarship through his Economics Institutes for Law Professors and Law Institutes for Economics Professors. The Manne Program promotes law-and-economics scholarship by funding faculty research and hosting research roundtables and academic conferences.

http://www.MasonManne.org

Late last year, with support from the International Center for Law and Economics, I published a paper that empirically analyzed the Philadelphia civil court system. That study focused on the Philadelphia Complex Litigation Center (PCLC), which handles large mass-tort programs including asbestos cases, hormone replacement therapy cases, various prescription drug-related injuries, and other mass-tort programs. The PCLC has recently come under criticism for its use of a number of controversial procedures, including the consolidation of asbestos cases and the use of reverse-bifurcation methods, in which a plaintiff’s damages are calculated before liability is established. The paper used publicly available data from the Administrative Office of Pennsylvania Courts to analyze trends in docketed and pending civil cases in Philadelphia as compared with other Pennsylvania counties, federal courts, and a national sample of state courts.

The study highlighted some unusual trends. Philadelphia case dockets are disproportionately large relative to both the city’s population and other state and federal courts. Philadelphia plaintiffs are also relatively more likely to prefer jury trials and less likely to settle than plaintiffs elsewhere in Pennsylvania. The data appear to support the conclusion that Philadelphia courts demonstrate a meaningful preference for plaintiffs, coaxing “business” from other courts by providing plaintiffs with a unique combination of advantages; indeed, the PCLC’s own stated goals include a desire to “[take] business away from other courts.” While these strategies have no doubt successfully increased litigation in Philadelphia and benefited local Philadelphia attorneys, they also impose substantial costs on Philadelphia businesses and consumers.

I’ve now conducted a preliminary supplemental analysis (available here) designed to test the proposition that the majority of plaintiffs in the PCLC are out-of-state plaintiffs without an apparent or substantive connection to either Philadelphia or the State of Pennsylvania. I examined a sample of about 1,400 of the mass-tort cases in the PCLC to determine whether the plaintiff filing each case had a home address, or had sustained the complained-of injury, in either Philadelphia or Pennsylvania. Although the findings are preliminary, the results indicate that a substantial fraction of plaintiffs with cases pending at the PCLC have no discernible or relevant connection to Philadelphia or Pennsylvania. This supplement to the original study provides strong evidence that the PCLC has succeeded in attracting a large number of out-of-state cases, which comprise a substantial portion of the civil cases in Philadelphia.

The main conclusions of this supplemental analysis are as follows:

  • Of the 1,357 cases in the sample, 913 (67.2%) were brought by plaintiffs who live out-of-state without any apparent connection to Pennsylvania or Philadelphia.
  • Only 180 cases (13.3%) reveal plaintiffs who live in or allege injury in Philadelphia.
  • The most substantial case types where the plaintiffs were overwhelmingly out-of-state are hormone therapy, denture adhesive cream, and Paxil birth defect cases.
  • Although most or all of the companies involved in these cases do business in Philadelphia, and a few have some sort of administrative offices there, the vast majority of defendants do not have their principal place of business in Philadelphia or even in Pennsylvania. It is unlikely that venue was transferred to the PCLC in most, if any, of these cases.

A chart summarizing the results is available here at Table 1.


The NYT reports:

When he rejected a new European accord on Friday that would bind the continent ever closer, Prime Minister David Cameron seemingly sacrificed Britain’s place in Europe to preserve the pre-eminence of the City, London’s financial district. The question now is whether his stance will someday seem justified, even prescient.

Mr. Cameron refused to go along with the new European plan of stricter fiscal oversight and discipline hammered out in Brussels this week, in great part because of fears that the City would be strangled by regulations emanating from Brussels. * * *

But will it matter?  The article points out that:

  • Non-British banks in the UK will still be subject to EU regulation.
  • European activity might be redirected from the UK to Frankfurt.
  • Europe could prohibit its banks from dealing with UK firms that didn’t adhere to EU regulation.
  • But the UK could exit the EU, reducing the impact of EU regulation in the UK.
  • European banks would still have a powerful incentive to remain global by competing in the UK market. 
  • Even if excluded from Europe, UK banks would still compete powerfully for U.S. and Asian business.  The UK didn’t lose its edge when it stayed with the pound, and likely won’t if it opts out of EU regulation.

And of course there’s the question whether UK hedge funds and other financial institutions will be better global competitors without being saddled by more intrusive European regulation.

Cameron’s move is a reminder that the EU has a double edge: it promotes competition within the EU, but erects a regulatory cartel for the EU against the rest of the world. With or without the cartel, it’s still a global economy, governed by the powerful forces of jurisdictional competition. Federal cartels can slow down that competition but not stop it.

It’s a lesson worth remembering for U.S. securities regulators.

T-R’s Alison Frankel writes (HT Pileggi) about dueling suits in Texas and Delaware challenging the El Paso/Kinder Morgan merger: Three class actions in Texas state court and two class actions and a shareholder derivative suit in Delaware Chancery.

It looks like this merger may bring to a head the “escape from Delaware” phenomenon I discussed a year ago.

I’m off to the International conference on “Regulatory Competition in Contract Law and Dispute Resolution” at Ludwig-Maximilians-University’s Center for Advanced Studies in Munich.  I’m joining an otherwise illustrious group (here’s the program) to present my and Kobayashi’s Law as a Byproduct.

Blogging may be light for the next week (but eating and drinking may be heavy). Tips on what I must see and do in Munich would be appreciated.

Alison Frankel gripes about a NJ judge’s ruling throwing out a shareholders’ derivative suit seeking to hold the J & J board accountable for problems concerning the company’s Risperdal drug. Frankel thinks the bad-faith standard the court applied is not high enough.

Ted Frank responds that the fact that the company had settled criminal allegations doesn’t mean the board was irresponsible, given big companies’ exposure to prosecutorial overreaching (here are my thoughts on the problems with prosecutors). He notes that, given huge potential penalties and legal costs, “even a risk-neutral set of executives would refuse to go to trial on criminal charges that they had a 95% chance of winning.” As Ted says:

The issue is this: first, any corporate law is going to have to balance false negatives (valid suits against directors being thrown out prematurely) and false positives (invalid suits against directors costing tens of millions of dollars in time and money to resolve). Any opening up of the courtroom doors to challenge directors will reduce false negatives at the expense of more false positives; any increase in the burden to bring suit will reduce false positives at the expense of more false negatives.

Anyway, Ted continues, shareholders of NJ corporations can decide to invest in firms incorporated elsewhere if they think NJ law is too lenient on directors, aptly citing my and O’Hara’s The Law Market.

Of course, Frankel might argue that the business judgment rule the court used to decide the case is ubiquitous, leaving plaintiffs with little choice. Indeed, the only significant dissent is Nevada, which is, if anything, even easier on directors than NJ. Frankel might also argue that this indicates state corporation law is rigged for managers, and that we would do better under federal law. Perhaps what we need is a super Dodd-Frank/SOX on steroids that preempts state law and exposes managers to suits like the one NJ dismissed.

I would respond that the universal acceptance of the business judgment rule represents the market’s rejection of Frankel’s position. If Frankel wants to complain that the market for corporate law is imperfect, she would need to persuade me that shareholders would be better off in the clutches of Congress.

My new paper with Erin O’Hara O’Connor has just been posted.  The paper analyzes preemption in light of the theories presented in our book, The Law Market.  I earlier discussed our evolving ideas and their application to the Supreme Court’s recent arbitration and immigration decisions.  Here’s the abstract:

The scope of federal preemption of state law has been plagued by uncertainty and confusion. The courts have applied a set of presumptions on an ad hoc and conflicting basis. Part of the problem is that the courts purport to be interpreting legislative intent while actually making unarticulated substantive policy judgments about the outcome of specific cases. This approach frustrates development of coherent preemption doctrine. Courts should consider a conceptually obvious but as yet unexplored factor in their decisions. Specifically, where Congressional intent is unclear, preemption determinations should consider whether the states have effectively allocated sovereign authority among themselves through choice-of-law rules. Where states have achieved such “horizontal coordination,” Congress often has little need to usurp the states’ role as laboratories for experimenting with potentially diverse substantive laws. Our novel approach preserves both the benefits of local and state sovereignty and Congress’s role of coordinating US laws where necessary. It also provides a coherent policy for guiding preemption decisions where Congressional intent is unclear.

Download it while it’s hot.