
I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and Congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest,

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’ disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps that I didn’t address in the original: the first relates to expert bias, and the second concerns office organization.

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than it would have if it had simply chosen outcomes at random. In technical parlance, this means the experts’ opinions were not calibrated; there wasn’t a correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events experts deemed impossible occurred with some regularity. In a number of fields, these supposedly impossible events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”
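To make the idea of calibration concrete, here is a minimal sketch in Python using made-up forecast numbers (not Tetlock’s data): it groups forecasts by the probability the forecaster stated and compares that stated probability with how often the event actually occurred.

```python
# A minimal sketch of what "calibration" means, using hypothetical forecast data.
# Each pair is (stated probability of an event, whether it actually happened).
from collections import defaultdict

forecasts = [
    (0.9, True), (0.9, False), (0.9, False),   # "90% sure" calls
    (0.5, True), (0.5, False),                 # "coin flip" calls
    (0.1, False), (0.1, True), (0.1, True),    # "near-impossible" calls
]

buckets = defaultdict(list)
for prob, occurred in forecasts:
    buckets[prob].append(occurred)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%} -> observed {observed:.0%} ({len(outcomes)} forecasts)")

# Well-calibrated forecasters would show observed frequencies close to their
# stated probabilities; Tetlock's finding was a large, systematic gap, with
# "impossible" events happening a meaningful share of the time.
```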

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to exhibit overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature leads to two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and riskier political ventures within the law.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, knowledge yields diminishing marginal returns for prediction. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult task.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. Coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event,

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believes the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Towards the end of the piece, Binder laid bare his concern, which is shared by many,

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and on videos online. Over time, external communication has risen to a prominent role in Congressional offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard-and-fast conclusions, it could help explain why bolstering tech expertise hasn’t been a winning legislative issue: the demand just isn’t there. And given the priorities offices do display a preference for, more expertise might not yield any benefits, while also giving offices potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

Gus Hurwitz is Assistant Professor of Law at the University of Nebraska College of Law.

Administrative law really is a strange beast. My last post explained this a bit, in the context of Chevron. In this post, I want to make this point in another context, explaining how utterly useless a policy statement can be. Our discussion today has focused on what should go into a policy statement – there seems to be general consensus that one is a good idea. But I’m not sure that we have a good understanding of how little certainty a policy statement offers.

Administrative Stare Decisis?

I alluded in my previous post to the absence of stare decisis in the administrative context. This is one of the greatest differences between judicial and administrative rulemaking: agencies are bound neither by prior judicial interpretations of their statutes nor even by their own prior interpretations. These conclusions follow from relatively recent opinions – Brand-X in 2005 and Fox I in 2009 – and have broad implications for the relationship between courts and agencies.

In Brand-X, the Court explained that a “court’s prior judicial construction of a statute trumps an agency construction otherwise entitled to Chevron deference only if the prior court decision holds that its construction follows from the unambiguous terms of the statute and thus leaves no room for agency discretion.” This conclusion follows from a direct application of Chevron: courts are responsible for determining whether a statute is ambiguous; agencies are responsible for determining the (reasonable) meaning of a statute that is ambiguous.

Not only are agencies not bound by a court’s prior interpretations of an ambiguous statute – they’re not even bound by their own prior interpretations!

In Fox I, the Court held that an agency’s own interpretation of an ambiguous statute imposes no special obligations should the agency subsequently change its interpretation.[1] The agency may need to acknowledge the prior policy, and it may need to explain factual findings underlying the new policy that contradict findings underlying the prior one.[2] But where a statute may be interpreted in multiple ways – that is, in any case where the statute is ambiguous – Congress, and by extension its agencies, is free to choose between those alternative interpretations. The fact that an agency previously adopted one interpretation does not necessarily render other possible interpretations any less reasonable; the mere fact that one was previously adopted therefore, on its own, cannot act as a bar to subsequent adoption of a competing interpretation.

What Does This Mean for Policy Statements?

In a contentious policy environment – that is, one where the prevailing understanding of an ambiguous law changes with the consensus of a three-Commissioner majority – policy statements are worth next to nothing. Generally, the value of a policy statement lies in explaining to a court the agency’s rationale for its preferred construction of an ambiguous statute. Absent such an explanation, a court is likely to find that the construction was not sufficiently reasoned to merit deference. That is: a policy statement makes it easier for an agency to assert a given construction of a statute in litigation.

But a policy statement isn’t necessary to make that assertion, or for an agency to receive deference. Absent a policy statement, the agency needs to demonstrate to the court that its interpretation of the statute is sufficiently reasoned (and not merely a strategic interpretation adopted for the purposes of the present litigation).

And, more important, a policy statement in no way prevents an agency from changing its interpretation. Fox I makes clear that an agency is free to change its interpretations of a given statute. Prior interpretations – including prior policy statements – are not a bar to such changes. Prior interpretations also, therefore, offer little assurance to parties subject to any given interpretation.

Are Policy Statements Entirely Useless?

Policy statements may not be entirely useless. The likely front on which to challenge an unexpected change in an agency’s interpretation of its statute is Due Process or Notice grounds. The existence of a policy statement may make it easier for a party to argue that a changed interpretation runs afoul of Due Process or Notice requirements. See, e.g., Fox II.

So there is some hope that a policy statement would be useful. But, in the context of Section 5 UMC claims, I’m not sure how much comfort this really affords. Regulatory takings jurisprudence gives agencies broad power to seemingly contravene Due Process and Notice expectations. This is largely because of the nature of relief available to the FTC: injunctive relief, such as barring certain business practices, even if it results in real economic losses, is likely to survive a regulatory takings challenge, and therefore also a Due Process challenge. Generally, the Due Process and Notice lines of argument are best suited against fines and similar retrospective remedies; they offer little comfort against prospective remedies like injunctions.

Conclusion

I’ll conclude the same way that I did my previous post, with what I believe is the most important takeaway from this post: however we proceed, we must do so with an understanding of both antitrust and administrative law. Administrative law is the unique, beautiful, and scary beast that governs the FTC – those who fail to respect its nuances do so at their own peril.


[1] FCC v. Fox Television Stations, Inc., 556 U.S. 502, 514–516 (2009) (“The statute makes no distinction [] between initial agency action and subsequent agency action undoing or revising that action. … And of course the agency must show that there are good reasons for the new policy. But it need not demonstrate to a court’s satisfaction that the reasons for the new policy are better than the reasons for the old one; it suffices that the new policy is permissible under the statute, that there are good reasons for it, and that the agency believes it to be better, which the conscious change of course adequately indicates.”).

[2] Id. (“To be sure, the requirement that an agency provide reasoned explanation for its action would ordinarily demand that it display awareness that it is changing position. … This means that the agency need not always provide a more detailed justification than what would suffice for a new policy created on a blank slate. Sometimes it must—when, for example, its new policy rests upon factual findings that contradict those which underlay its prior policy; or when its prior policy has engendered serious reliance interests that must be taken into account. It would be arbitrary or capricious to ignore such matters. In such cases it is not that further justification is demanded by the mere fact of policy change; but that a reasoned explanation is needed for disregarding facts and circumstances that underlay or were engendered by the prior policy.”).