Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa)—cosponsors of the American Innovation and Choice Online Act (AICOA), which seeks to “rein in” tech companies like Apple, Google, Meta, and Amazon—contend that “everyone acknowledges the problems posed by dominant online platforms.”

In their framing, it is simply an acknowledged fact that U.S. antitrust law has not kept pace with developments in the digital sector, allowing a handful of Big Tech firms to exploit consumers and foreclose competitors from the market. To address the issue, the senators’ bill would bar “covered platforms” from engaging in a raft of conduct, including self-preferencing, tying, and limiting interoperability with competitors’ products.

That’s what makes the open letter to Congress published late last month by the usually staid American Bar Association’s (ABA) Antitrust Law Section so eye-opening. The letter is nothing short of a searing critique of the legislation, which the section finds to be poorly written, vague, and a departure from established antitrust-law principles.

The ABA, of course, has a reputation as an independent, highly professional, and heterogeneous group. The antitrust section’s membership includes not only in-house corporate counsel, but also lawyers from nonprofits, consulting firms, and federal and state agencies, as well as judges and legal academics. Given this context, the comments must be read as a high-level judgment that recent legislative and regulatory efforts to “discipline” tech fall outside the legal mainstream and would come at the cost of established antitrust principles, legal precedent, transparency, sound economic analysis, and ultimately consumer welfare.

The Antitrust Section’s Comments

As the ABA Antitrust Law Section observes:

The Section has long supported the evolution of antitrust law to keep pace with evolving circumstances, economic theory, and empirical evidence. Here, however, the Section is concerned that the Bill, as written, departs in some respects from accepted principles of competition law and in so doing risks causing unpredicted and unintended consequences.

Broadly speaking, the section’s criticisms fall into two interrelated categories. The first relates to deviations from antitrust orthodoxy and the principles that guide enforcement. The second is a critique of the AICOA’s overly broad language and ambiguous terminology.

Departing from established antitrust-law principles

Substantively, the overarching concern expressed by the ABA Antitrust Law Section is that AICOA departs from the traditional role of antitrust law, which is to protect the competitive process rather than to favor some competitors at the expense of others. Indeed, the section’s open letter observes that, of the 10 categories of prohibited conduct spelled out in the legislation, only three require a “material harm to competition.”

Take, for instance, the prohibition on “discriminatory” conduct. As it stands, the bill’s language does not require a showing of harm to the competitive process. It instead appears to enshrine a freestanding prohibition of discrimination. The bill also targets tying practices that are already prohibited by U.S. antitrust law, while similarly eschewing the traditional required showings of market power and harm to the competitive process. The same can be said, mutatis mutandis, for “self-preferencing” and the “unfair” treatment of competitors.

The problem, the section’s letter to Congress argues, is not only that this increases the teleological chasm between AICOA and the overarching goals and principles of antitrust law, but that it can also easily lead to harmful unintended consequences. For instance, as the ABA Antitrust Law Section previously observed in comments to the Australian Competition and Consumer Commission, a prohibition on pricing discrimination can limit the extent of discounting generally. Similarly, self-preferencing conduct on a platform can be welfare-enhancing, while forced interoperability—which is also contemplated by AICOA—can increase prices for consumers and dampen incentives to innovate. Furthermore, some of these blanket prohibitions are arguably at loggerheads with established antitrust doctrine, such as the holding in Trinko that even monopolists are generally free to decide with whom they will deal.

Arguably, the reason the Klobuchar-Grassley bill can so seamlessly exclude or redraw such a central element of antitrust law as competitive harm is that it deliberately chooses to ignore another, preceding one. Namely, the bill omits market power as a requirement for a finding of infringement or for the legislation’s equally crucial designation as a “covered platform.” It instead prescribes size metrics—number of users, market capitalization—to define which platforms are subject to intervention. Such definitions cast an overly wide net that can potentially capture consumer-facing conduct that doesn’t have the potential to harm competition at all.

It is precisely for this reason that existing antitrust laws are tethered to market power—i.e., because it long has been recognized that only companies with market power can harm competition. As John B. Kirkwood of Seattle University School of Law has written:

Market power’s pivotal role is clear…This concept is central to antitrust because it distinguishes firms that can harm competition and consumers from those that cannot.

In response to the above, the ABA Antitrust Law Section (reasonably) urges Congress explicitly to require an effects-based showing of harm to the competitive process as a prerequisite for all 10 of the infringements contemplated in the AICOA. This also means disclaiming generalized prohibitions of “discrimination” and of “unfairness” and replacing blanket prohibitions (such as the one for self-preferencing) with measured case-by-case analysis.

Opaque language for opaque ideas

Another underlying issue is that the Klobuchar-Grassley bill is shot through with indeterminate language and fuzzy concepts that have no clear limiting principles. For instance, in order either to establish liability or to mount a successful defense to an alleged violation, the bill relies heavily on inherently amorphous terms such as “fairness,” “preferencing,” and “materiality,” or the “intrinsic” value of a product. But as the ABA Antitrust Law Section letter rightly observes, these concepts are not defined in the bill, nor by existing antitrust case law. As such, they inject variability and indeterminacy into how the legislation would be administered.

Moreover, it is also unclear how some incommensurable concepts will be weighed against each other. For example, how would concerns about safety and security be weighed against prohibitions on self-preferencing or requirements for interoperability? What is a “core function” and when would the law determine it has been sufficiently “enhanced” or “maintained”—requirements the law sets out to exempt certain otherwise prohibited behavior? The lack of linguistic and conceptual clarity not only undermines legal certainty, but also invites judicial second-guessing of business decisions, something against which the U.S. Supreme Court has long warned.

Finally, the bill’s choice of language and recent amendments to its terminology seem to confirm the dynamic discussed in the previous section. Most notably, the latest version of AICOA replaces earlier language invoking “harm to the competitive process” with “material harm to competition.” As the ABA Antitrust Law Section observes, this “suggests a shift away from protecting the competitive process towards protecting individual competitors.” Indeed, “material harm to competition” deviates from established categories such as “undue restraint of trade” or “substantial lessening of competition,” which have a clear focus on the competitive process. As a result, it is not unreasonable to expect that the new terminology might be interpreted as meaning that the actionable standard is material harm to competitors.

In its letter, the antitrust section urges Congress not only to define more clearly the novel terminology used in the bill, but also to do so in a manner consistent with existing antitrust law. Indeed:

The Section further recommends that these definitions direct attention to analysis consistent with antitrust principles: effects-based inquiries concerned with harm to the competitive process, not merely harm to particular competitors

Conclusion

The AICOA is a poorly written, misguided, and rushed piece of legislation that contravenes both basic antitrust-law principles and mainstream economic insights in the pursuit of a pre-established populist political goal: punishing the success of tech companies. If left uncorrected by Congress, these mistakes could have potentially far-reaching consequences for innovation in digital markets and for consumer welfare. They could also set antitrust law on a regressive course back toward a policy of picking winners and losers.

Congress needs help understanding the fast-moving world of technology. That help is not going to arise by reviving the Office of Technology Assessment (“OTA”), however. The OTA is an idea for another age, while the tweaks necessary to shore up the existing technology resources available to Congress are relatively modest.

Although a new OTA is unlikely to be harmful, it would entail the expenditure of additional resources, including the political capital necessary to create a new federal agency, along with all the attendant revolving-door implications.

The real problem with reviving the OTA is that it distracts Congress from considering that it needs to be more than merely well-informed. What we need is both smarter regulation and regulation better tailored to 21st-century technology and the economy. A new OTA might help with the former problem, but may in fact only exacerbate the latter.

The OTA is a poor fit for the modern world

The OTA began existence in 1972, with a mission to provide science and technology advice to Congress. It was closed in 1995, following budget cuts. Lately, some well-meaning folks — including even some presidential hopefuls — have sought to revive the OTA.

To the extent that something like the OTA would be salutary today, it would be as a check on incorrect technological and scientific assumptions contained in proposed legislation. For example, in the 1990s the OTA provided useful technical information to Congress about how encryption technologies worked as it was considering legislation such as CALEA.

Yet there is good reason to believe that a new legislative-branch agency would not outperform the alternatives available today for these functions. A recent study from the National Academy of Public Administration (“NAPA”), undertaken at the request of Congress and the Congressional Research Service, summarized the OTA’s poor fit for today’s legislative process.

A new OTA “would have similar vulnerabilities that led to the dis-establishment of the [original] OTA.” While a new OTA could provide some information and services to Congress, “such services are not essential for legislators to actually craft legislation, because Congress has multiple sources for [Science and Technology] information/analysis already and can move legislation forward without a new agency.” Moreover, according to interviewed legislative branch personnel, the original OTA’s reports “were not critical parts of the legislative deliberation and decision-making processes during its existence.”

The upshot?

A new [OTA] conducting helpful but not essential work would struggle to integrate into the day-to-day legislative activities of Congress, and thus could result in questions of relevancy and leave it potentially vulnerable to political challenges

The NAPA report found that the Congressional Research Service (“CRS”) and the Government Accountability Office (“GAO”) already contained most of the resources that Congress needed. The report recommended enhancing those existing resources and creating a science and technology coordinator position in Congress to facilitate the hiring of appropriate personnel for committees, among other duties.

The one gap identified by the NAPA report is that Congress currently has no “horizon scanning” capability to look at emerging trends over the long term. This was an original function of the OTA.

According to Peter D. Blair, in his book Congress’s Own Think Tank – Learning from the Legacy of the Office of Technology Assessment, an original intention of the OTA was to “provide an ‘early warning’ on the potential impacts of new technology.” (p. 43). But over time, the agency, facing the bureaucratic incentive to avoid political controversy, altered its behavior and became carefully “responsive[] to congressional needs” (p. 51) — which is a polite way of saying that the OTA’s staff came to see their purpose as providing justification for Congress to enact desired legislation and to avoid raising concerns that could be an impediment to that legislation. The bureaucratic pressures facing the agency forced a mission drift that would be highly likely to recur in a new OTA.

The NAPA report, however, has its own recommendation that does not involve the OTA: have the newly created science and technology coordinator produce annual horizon-scanning reports.

A new OTA unnecessarily increases the surface area for regulatory capture

Apart from the likelihood that a new OTA would be a mere redundancy, it presents yet another vector for regulatory capture (or at least endless accusations of regulatory capture used to undermine its work). Andrew Yang inadvertently points to this fact on his campaign page calling for a revival of the OTA:

This vital institution needs to be revived, with a budget large enough and rules flexible enough to draw top talent away from the very lucrative private sector.

Yang’s wishcasting aside, there is just no way that you are going to create an institution with a “budget large enough and rules flexible enough” to permanently siphon off top-tier talent from multibillion-dollar firms working on creating cutting-edge technologies. What you will do is create an interesting, temporary post-graduate school or mid-career stop-over point where top-tier talent can cycle in and out of those top firms. These are highly intelligent, very motivated individuals who want to spend their careers making stuff, not writing research reports for Congress.

The same experts who are sufficiently high-level to work at the OTA will be similarly employable by large technology and scientific firms. The revolving door is all but inevitable.

The real problem to solve is a lack of modern governance

Lack of adequate information per se is not the real problem facing members of Congress today. The real problem is that, for the most part, legislators neither understand nor seem to care about how best to govern and establish regulatory frameworks for new technology. As a result, Congress passes laws that threaten to slow down the progress of technological development, thus harming consumers while protecting incumbents. 

Assuming for the moment that there is some kind of horizon-scanning capability that a new OTA could provide, it necessarily fails, even on these terms. By the time Congress is sufficiently alarmed by a new or latent “problem” (or at least a politically relevant feature) of technology, the industry or product under examination has most likely already progressed far enough in its development that it’s far too late for Congress to do anything useful. Even though the NAPA report’s authors seem to believe that a “horizon scanning” capability will help, in a dynamic economy, truly predicting the technology that will impact society seems a bit like trying to predict the weather on a particular day a year hence.

Further, the limits of human cognition restrict the utility of “more information” to the legislative process. Will Rinehart discussed this quite ably, pointing to the psychological literature indicating that, in many cases involving technical subjects, more information given to legislators only makes them overconfident. That is to say, they can cite more facts, but put fewer of them to good use when writing laws.

The truth is, no degree of expertise will ever again provide an adequate basis for producing prescriptive legislation meant to guide an industry or segment. The world is simply moving too fast.  

It would be far more useful for Congress to explore legislation that encourages the firms involved in highly dynamic industries to develop and enforce voluntary standards that emerge as community standards. See, for example, the observation offered by Jane K. Winn in her paper on information governance and privacy law that

[i]n an era where the ability to compete effectively in global markets increasingly depends on the advantages of extracting actionable insights from petabytes of unstructured data, the bureaucratic individual control right model puts a straightjacket on product innovation and erects barriers to fostering a culture of compliance.

Winn is thinking about what a “governance” response to privacy and crises like the Cambridge Analytica scandal should be, and posits those possibilities against the top-down response of the EU with its General Data Protection Regulation (“GDPR”). She notes that preliminary research on the GDPR suggests that framing privacy legislation as bureaucratic control over firms using consumer data can have the effect of removing all of the risk-management features that the private sector is good at developing.

Instead of pursuing legislative agendas that imagine the state as the all-seeing eye at the top of a command-and-control legislative pyramid, lawmakers should seek to enable those with relevant functional knowledge to employ that knowledge for good governance, broadly understood:

Reframing the information privacy law reform debate as the process of constructing new information governance institutions builds on decades of American experience with sector-specific, risk based information privacy laws and more than a century of American experience with voluntary, consensus standard-setting processes organized by the private sector. The turn to a broader notion of information governance reflects a shift away from command-and-control strategies and toward strategies for public-private collaboration working to protect individual, institutional and social interests in the creation and use of information.

The implications for a new OTA are clear. The model of “gather all relevant information on a technical subject to help construct a governing code” was best suited, if it was ever workable, to a world that moved at an industrial-era pace. Today, governance structures need to be much more flexible, and the work of an OTA — even if Congress didn’t already have most of its advisory bases covered — has little relevance.

The engineers working at firms developing next-generation technologies are the individuals with the most relevant, timely knowledge. A forward-looking view of regulation would try to develop a means for the information these engineers hold to surface and become an ongoing part of the governing standards.

*note – This post originally said that OTA began “operating” in 1972. I meant to say it began “existence” in 1972. I have corrected the error.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest,

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’s disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps that I didn’t address in the original: the first relates to expert bias, and the second concerns office organization.

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than they would have if they had just randomly chosen an outcome. In the technical parlance, this means expert opinions were not calibrated: there wasn’t a correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events experts deemed impossible occurred with some regularity. In a number of fields, these supposedly unlikely events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”
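For readers who want the idea of calibration pinned down, here is a minimal sketch in Python (the forecasts are invented for illustration, not drawn from Tetlock's data) showing how one checks whether stated probabilities line up with observed frequencies:

```python
from collections import defaultdict

# Hypothetical forecasts: (stated probability, whether the event occurred).
# Purely illustrative numbers, not Tetlock's data.
forecasts = [
    (0.9, True), (0.9, False), (0.9, False), (0.9, True), (0.9, False),
    (0.5, True), (0.5, False), (0.5, True), (0.5, False),
    (0.1, True), (0.1, False), (0.1, False), (0.1, True), (0.1, False),
]

# Group outcomes by the probability the forecaster stated.
buckets = defaultdict(list)
for prob, occurred in forecasts:
    buckets[prob].append(occurred)

# Forecasts are well calibrated when, within each bucket, the observed
# frequency of the event roughly matches the stated probability.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%} -> observed {observed:.0%} (n={len(outcomes)})")

# In this toy data, events called "90% likely" occur only 40% of the time,
# and events called "10% likely" occur 40% of the time: the kind of
# miscalibration (and regularly occurring "impossible" events) Tetlock found.
```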

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to exhibit overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature leads to two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and riskier political ventures within the law.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, there is a diminishing marginal predictive return to knowledge. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. Coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event,

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believe the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Towards the end of the piece, Binder laid bare his concern, which is shared by many,

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and in videos online. Over time, external communication has risen to a prominent role in congressional offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard-and-fast conclusions, it could help explain why expanded tech expertise hasn’t been a winning legislative issue: the demand just isn’t there. And given the priorities offices do display a preference for, more expertise might not yield any benefits, while also giving offices potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

Below is the text of my oral testimony to the Senate Commerce, Science and Transportation Committee, the Consumer Protection, Product Safety, and Insurance Subcommittee, at its November 7, 2013 hearing on “Demand Letters and Consumer Protection: Examining Deceptive Practices by Patent Assertion Entities.” Information on the hearing is here, including an archived webcast of the hearing. My much longer and more in-depth written testimony is here.

Please note that I am incorrectly identified on the hearing website as speaking on behalf of the Center for the Protection of Intellectual Property (CPIP). In fact, I was invited to testify solely in my personal capacity as a Professor of Law at George Mason University School of Law, given my academic research into the history of the patent system and the role of licensing and commercialization in the distribution of patented innovation. I spoke for neither George Mason University nor CPIP, and thus I am solely responsible for the content of my research and remarks.

Chairman McCaskill, Ranking Member Heller, and Members of the Subcommittee:

Thank you for this opportunity to speak with you today.

There certainly are bad actors, deceptive demand letters, and frivolous litigation in the patent system. The important question, though, is whether there is a systemic problem requiring further systemic revisions to the patent system. The answer to this question is no, and this is the case for three reasons.

Harm to Innovation

First, the calls to rush to enact systemic revisions to the patent system are being made without established evidence that there is in fact systemic harm to innovation, let alone any harm to the consumers that Section 5 authorizes the FTC to protect. As the Government Accountability Office found in its August 2013 report on patent litigation, the frequently cited studies claiming harms are actually “nonrandom and nongeneralizable,” which means they are unscientific and unreliable.

These anecdotal reports and unreliable studies do not prove there is a systemic problem requiring a systemic revision to patent licensing practices.

Of even greater concern is that the many changes to the patent system Congress is considering, including extending the FTC’s authority over demand letters, would impose serious costs on real innovators and thus do actual harm to America’s innovation economy and job growth.

From Charles Goodyear and Thomas Edison in the nineteenth century to IBM and Microsoft today, patent licensing has been essential in bringing patented innovation to the marketplace, creating economic growth and a flourishing society. But expanding FTC authority to regulate requests for licensing royalties under vague evidentiary and legal standards only weakens patents and creates costly uncertainty.

This will hamper America’s innovation economy—causing reduced economic growth, lost jobs, and reduced standards of living for everyone, including the consumers the FTC is charged to protect.

Existing Tools

Second, the Patent and Trademark Office (PTO) and courts have long had the legal tools to weed out bad patents and punish bad actors, and these tools were massively expanded just two years ago with the enactment of the America Invents Act.

This is important because the real concern with demand letters is that the underlying patents are invalid.

No one denies that owners of valid patents have the right to license their property or to sue infringers, or that patent owners can even make patent licensing their sole business model, as did Charles Goodyear and Elias Howe in the mid-nineteenth century.

There are too many of these tools to discuss in my brief remarks, but to name just a few: recipients of demand letters can sue patent owners in courts through declaratory judgment actions and invalidate bad patents. And the PTO now has four separate programs dedicated solely to weeding out bad patents.

For those who lack the knowledge or resources to access these legal tools, there are now numerous legal clinics, law firms and policy organizations that actively offer assistance.

Again, further systemic changes to the patent system are unwarranted because there are existing legal tools with established legal standards to address the bad actors and their bad patents.

If Congress enacts a law this year, then it should secure full funding for the PTO. Weakening patents and creating more uncertainties in the licensing process is not the solution.

Rhetoric

Lastly, Congress is being driven to revise the patent system on the basis of rhetoric and anecdote instead of objective evidence and reasoned explanations. While there are bad actors in the patent system, terms like “PAE” or “patent troll” constantly shift in meaning. These terms have been used to cover anyone who licenses patents, including universities, startups, companies that engage in R&D, and many others.

Classic American innovators in the nineteenth century like Thomas Edison, Charles Goodyear, and Elias Howe would be called PAEs or patent trolls today. In fact, they and other patent owners made royalty demands against thousands of end users.

Congress should exercise restraint when it is being asked to enact systemic legislative or regulatory changes on the basis of pejorative labels that would lead us to condemn or discriminate against classic innovators like Edison who have contributed immensely to America’s innovation economy.

Conclusion

In conclusion, the benefits and costs of patent licensing to the innovation economy are an important empirical and policy question, but systemic changes to the patent system should not be based on rhetoric, anecdotes, invalid studies, and incorrect claims about the historical and economic significance of patent licensing.

As former PTO Director David Kappos stated last week in his testimony before the House Judiciary Committee: “we are reworking the greatest innovation engine the world has ever known, almost instantly after it has just been significantly overhauled. If there were ever a case where caution is called for, this is it.”

Thank you.

Earlier this month, Representatives Peter DeFazio and Jason Chaffetz picked up the gauntlet thrown down by President Obama in his February 14 comments at a Google-sponsored Internet Q&A on Google+ that “our efforts at patent reform only went about halfway to where we need to go” and that he would like “to see if we can build some additional consensus on smarter patent laws.” So, on March 1, Reps. DeFazio and Chaffetz introduced the Saving High-tech Innovators from Egregious Legal Disputes (SHIELD) Act, which creates a “losing plaintiff patent-owner pays” litigation system for a single type of patent owner—patent licensing companies that purchase and license patents in the marketplace (and that sue infringers who refuse their requests to license). To Google, to Representative DeFazio, and to others, these patent licensing companies are “patent trolls” who are destroyers of all things good—and the SHIELD Act will save us all from these dastardly “trolls” (is a troll anything but dastardly?).

As I and other scholars have pointed out, the “patent troll” moniker is really just a rhetorical epithet that lacks even an agreed-upon definition.  The term is used loosely enough that it sometimes covers and sometimes excludes universities, Thomas Edison, Elias Howe (the inventor of the lockstitch in 1843), Charles Goodyear (the inventor of vulcanized rubber in 1839), and even companies like IBM.  How can we be expected to have a reasonable discussion about patent policy when our basic terms of public discourse shift in meaning from blog to blog, article to article, speaker to speaker?  The same is true of the new term, “Patent Assertion Entities,” which sounds more neutral, but has the same problem in that it also lacks any objective definition or usage.

Setting aside this basic problem of terminology for the moment, the SHIELD Act is anything but a “smarter patent law” (to quote President Obama). Some patent scholars, like Michael Risch, have begun to point out some of the serious problems with the SHIELD Act, such as its selectively discriminatory treatment of certain types of patent-owners. Moreover, as Professor Risch ably identifies, this legislation was so cleverly drafted to cover only a limited set of a specific type of patent-owner that it ended up being too clever. Unlike the previous version introduced last year, the 2013 SHIELD Act does not even apply to the flavor-of-the-day outrage over patent licensing companies—the owner of the podcast patent. (Although you wouldn’t know this if you read supporters of the SHIELD Act, like the EFF, who falsely claim that this law will stop patent-owners like the podcast patent-owning company.)

There are many things wrong with the SHIELD Act, but one thing that I want to highlight here is that it is based on a falsehood: the oft-repeated claim that two Boston University researchers have proven in a study that “patent troll suits cost American technology companies over $29 billion in 2011 alone.” This is what Rep. DeFazio said when he introduced the SHIELD Act on March 1. This claim was repeated yesterday by House Members during a hearing on “Abusive Patent Litigation.” The claim that patent licensing companies cost American tech companies $29 billion in a single year (2011) has become gospel since this study, The Direct Costs from NPE Disputes, was released last summer on the Internet. (Another name for patent licensing companies is “Non-Practicing Entity” or “NPE.”) A Google search of “patent troll 29 billion” produces 191,000 hits. A Google search of “NPE 29 billion” produces 605,000 hits. Such is the making of conventional wisdom.

The problem with conventional wisdom is that it is usually incorrect, and the study that produced the claim of “$29 billion imposed by patent trolls” is no different. The $29 billion cost study is deeply and fundamentally flawed, as explained by two noted professors, David Schwartz and Jay Kesan, who are also highly regarded for their empirical and economic work in patent law.  In their essay, Analyzing the Role of Non-Practicing Entities in the Patent System, also released late last summer, they detailed at great length serious methodological and substantive flaws in The Direct Costs from NPE Disputes. Unfortunately, the Schwartz and Kesan essay has gone virtually unnoticed in the patent policy debates, while the $29 billion cost claim has through repetition become truth.

In the hope that at least a few more people might discover the Schwartz and Kesan essay, I will briefly summarize some of their concerns about the study that produced the $29 billion cost figure.  This is not merely an academic exercise.  Since Rep. DeFazio explicitly relied on the $29 billion cost claim to justify the SHIELD Act, and he and others keep repeating it, it’s important to know if it is true, because it’s being used to drive proposed legislation in the real world.  If patent legislation is supposed to secure innovation, then it behooves us to know if this legislation is based on actual facts. Yet, as Schwartz and Kesan explain in their essay, the $29 billion cost claim is based on a study that is fundamentally flawed in both substance and methodology.

In terms of its methodological flaws, the study supporting the $29 billion cost claim employs an incredibly broad definition of “patent troll” that covers almost every person, corporation, or university that sues someone for infringing a patent that is not being used to manufacture a product at that moment. While the meaning of the “patent troll” epithet shifts depending on the commentator, reporter, blogger, or scholar who is using it, one would be extremely hard-pressed to find anyone embracing this expansive usage in patent scholarship or similar commentary today.

There are several reasons why the extremely broad definition of “NPE” or “patent troll” in the study is unusual even compared to uses of this term in other commentary or studies. First, and most absurdly, this definition, by necessity, includes every university in the world that sues someone for infringing one of its patents, as universities don’t manufacture goods. Second, it includes every individual and start-up company that plans to manufacture a patented invention, but is forced to sue an infringer-competitor that thwarted these business plans with its infringing sales in the marketplace. Third, it includes commercial firms throughout the wide-ranging innovation industries—from high tech to biotech to traditional manufacturing—that have at least one patent among a portfolio of thousands that is not being used at the moment to manufacture a product because it may be “well outside the area in which they make products,” and yet they sue infringers of this patent (the quoted language is from the study). So, according to this study, every manufacturer becomes an “NPE” or “patent troll” if it strays too far from what somebody subjectively defines as its rightful “area” of manufacturing. What company is not branded an “NPE” or “patent troll” under this definition, or will necessarily become one in the future given inevitable changes in one’s business plans or commercial activities? This is particularly true for every person or company whose only current opportunity to reap the benefit of their patented invention is to license the technology or to litigate against the infringers who refuse license offers.

So, when almost every possible patent-owning person, university, or corporation is defined as an “NPE” or “patent troll,” why are we surprised that a study employing this virtually boundless definition concludes that they create $29 billion in litigation costs per year? The only thing surprising is that the number isn’t even higher!

There are many other methodological flaws in the $29 billion cost study, such as its explicit assumption that patent litigation costs are “too high” without providing any comparative baseline for this conclusion.  What are the costs in other areas of litigation, such as standard commercial litigation, tort claims, or disputes over complex regulations?  We are not told.  What are the historical costs of patent litigation?  We are not told.  On what basis then can we conclude that $29 billion is “too high” or even “too low”?  We’re supposed to be impressed by a number that exists in a vacuum and that lacks any empirical context by which to evaluate it.

The $29 billion cost study also assumes that all litigation transaction costs are deadweight losses, which would mean that the entire U.S. court system is a deadweight loss according to the terms of this study. Every lawsuit, whether a contract, tort, property, regulatory, or constitutional dispute, is, according to the assumption of the $29 billion cost study, a deadweight loss. The entire U.S. court system is an inefficient cost imposed on everyone who uses it. Really? That’s an assumption that reduces itself to absurdity—it’s a self-imposed reductio ad absurdum!

In addition to the methodological problems, there are also serious concerns about the trustworthiness and quality of the actual data used to reach the $29 billion claim in the study.  All studies rely on data, and in this case, the $29 billion study used data from a secret survey done by RPX of its customers.  For those who don’t know, RPX’s business model is to defend companies against these so-called “patent trolls.”  So, a company whose business model is predicated on hyping the threat of “patent trolls” does a secret survey of its paying customers, and it is now known that RPX informed its customers in the survey that their answers would be used to lobby for changes in the patent laws.

As every reputable economist or statistician will tell you, such conditions encourage exaggeration and bias in a data sample by motivating participation among those who support changes to the patent law.  Such a problem even has a formal name in economic studies: self-selection bias.  But one doesn’t need to be an economist or statistician to be able to see the problems in relying on the RPX data to conclude that NPEs cost $29 billion per year. As the classic adage goes, “Something is rotten in the state of Denmark.”
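To see concretely how self-selection skews a survey, consider this minimal simulation (all numbers are invented; nothing here is drawn from the actual RPX survey). If firms with higher litigation costs are more likely to respond, the surveyed average will overstate the true population average:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical population of 10,000 firms with annual patent-litigation
# costs in $ millions. Invented, skewed numbers, purely for illustration.
population = [random.lognormvariate(0, 1.0) for _ in range(10_000)]
true_mean = sum(population) / len(population)

def responds(cost: float) -> bool:
    """Self-selection: a firm's chance of answering the survey rises
    with its litigation costs (high-cost firms want the law changed)."""
    return random.random() < min(1.0, 0.1 + 0.2 * cost)

sample = [cost for cost in population if responds(cost)]
sample_mean = sum(sample) / len(sample)

print(f"true mean cost:     ${true_mean:.2f}M")
print(f"surveyed mean cost: ${sample_mean:.2f}M (n={len(sample)})")
# The surveyed mean comes out well above the true mean, because
# participation correlates with the very quantity being measured.
```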

Even worse, as I noted above, the RPX survey was confidential.  RPX has continued to invoke “client confidences” in refusing to disclose its actual customer survey or the resulting data, which means that the data underlying the $29 billion claim is completely unknown and unverifiable for anyone who reads the study.  Don’t worry, the researchers have told us in a footnote in the study, they looked at the data and confirmed it is good.  Again, it doesn’t take economic or statistical training to know that something is not right here. Another classic cliché comes to mind at this point: “it’s not the crime, it’s the cover-up.”

In fact, keeping data secret in a published study violates well-established and longstanding norms in all scientific research that data should always be made available for testing and verification by third parties.  No peer-reviewed medical or scientific journal would publish a study based on a secret data set in which the researchers have told us that we should simply trust them that the data is accurate.  Its use of secret data probably explains why the $29 billion study has not yet appeared in a peer-reviewed journal, and, if economics has any claim to being an actual science, this study never will.  If a study does not meet basic scientific standards for verifying data, then why are Reps. DeFazio and Chaffetz relying on it to propose national legislation that directly impacts the patent system and future innovation?  If heads-in-the-clouds academics would know to reject such a study as based on unverifiable, likely biased claptrap, then why are our elected officials embracing it to create real-world legal rules?

And, to continue our running theme of classic clichés, there’s the rub. The more one looks at the actual legal requirements of the SHIELD Act, the more, in the words of Professor Risch, one is left “scratching one’s head” in bewilderment. The more one looks at the supporting studies and arguments in favor of the SHIELD Act, the more one is left, again in Professor Risch’s words, “scratching one’s head.” The more one thinks about the SHIELD Act, the more one realizes what it is—legislation that has been crafted at the behest of the politically powerful (such as an Internet company that can get the President to do a special appearance on its own social-media website) to have the government eliminate a smaller, publicly reviled, and less politically connected group.

In short, people may have legitimate complaints about the problems of the U.S. court system generally. Commentators and Congresspersons could even consider revising the general legal rules governing patent litigation for all plaintiffs and defendants to make the litigation system work better or more efficiently (by some established metric). Professor Risch has done exactly this in a recent Wired op-ed. But it’s time to call a spade a spade: the SHIELD Act is a classic example of rent-seeking, discriminatory legislation.