Archives For Congress

During last week’s antitrust hearing, Representative Jamie Raskin (D-Md.) provided a sound bite that served as a salvo: “In the 19th century we had the robber barons, in the 21st century we get the cyber barons.” But with sound bites, much like bumper stickers, there’s no room for nuance or scrutiny.

The news media has extensively covered the “questioning” of the CEOs of Facebook, Google, Apple, and Amazon (collectively “Big Tech”). Of course, most of this questioning was actually political posturing, with little regard for the answers or for antitrust law. But just like with the so-called robber barons, the story of Big Tech is much more interesting and complex.

The myth of the robber barons: Market entrepreneurs vs. political entrepreneurs

The Robber Barons: The Great American Capitalists, 1861–1901 (1934), by Matthew Josephson, was written in the midst of America’s Great Depression. Josephson, a Marxist with sympathies for the Soviet Union, made the case that the 19th-century titans of industry got rich on the backs of the poor during the industrial revolution. The idea that the rich are wealthy because they robbed the rest of us has long outlived Josephson and Marx, down to the present day, as exemplified by the writings of Matt Stoller and the politics of the House Judiciary Committee.

In his The Myth of the Robber Barons, Burton Folsom, Jr. makes the case that much of the received wisdom about the great 19th-century businessmen is wrong. He distinguishes between market entrepreneurs, who generated wealth by selling newer, better, or less expensive products on the free market without any government subsidies, and political entrepreneurs, who became rich primarily by influencing the government to subsidize their businesses or to enact legislation or regulation that harmed their competitors.

Folsom narrates the stories of market entrepreneurs, like Thomas Gibbons & Cornelius Vanderbilt (steamships), James Hill (railroads), the Scranton brothers (iron rails), Andrew Carnegie & Charles Schwab (steel), and John D. Rockefeller (oil), who created immense value for consumers by drastically reducing the prices of the goods and services their companies provided. Yes, these men got rich. But the value society received was arguably even greater. Wealth was created because market exchange is a positive-sum game.

On the other hand, the political entrepreneurs, like Robert Fulton & Edward Collins (steamships), and Leland Stanford & Henry Villard (railroads), drained societal resources by using taxpayer money to create inefficient monopolies. Because their favored position shielded them from market discipline, cutting costs and prices was less important to them than it was to the market entrepreneurs. Their wealth came at the expense of the rest of society, because political exchange is a zero-sum game.

Big Tech makes society better off

Today’s titans of industry, i.e., Big Tech, have created enormous value for society. This is almost impossible to deny, though some try. From zero-priced search on Google, to the convenience and price of products on Amazon, to the nominally free social network(s) of Facebook, to the plethora of options in Apple’s App Store, consumers have greatly benefited from Big Tech. Consumers flock to use Google, Facebook, Amazon, and Apple for a reason: they believe they are getting a great deal.

By and large, the techlash comes from “intellectuals” who think they know better than consumers acting in the marketplace about what is good for them. And as noted by Alec Stapp, Americans in opinion polls consistently put a great deal of trust in Big Tech, at least compared to government institutions.

One of the basic building blocks of economics is that both parties benefit from a voluntary exchange ex ante, or else they would not be willing to engage in it. The fact that consumers use Big Tech to the extent they do is overwhelming evidence of its value. Obfuscations like “market power” mislead more than they inform. In the absence of governmental barriers to entry, consumers voluntarily choosing Big Tech does not mean these companies have power; it means they provide great service.
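The logic of mutually beneficial exchange can be made concrete with a toy example (the dollar figures are hypothetical, chosen purely for illustration):

```python
def surplus_from_trade(buyer_value, seller_cost, price):
    """Gains each side captures from a voluntary trade at a given price.

    A trade only happens voluntarily if both surpluses are non-negative:
    the buyer values the good at more than the price, and the seller's
    cost is below it. Both parties walk away better off ex ante.
    """
    buyer_surplus = buyer_value - price    # what the buyer gains
    seller_surplus = price - seller_cost   # what the seller gains
    return buyer_surplus, seller_surplus

# A consumer values a service at $10; it costs the firm $6 to provide;
# they trade at $8. Each side is $2 better off: a positive-sum exchange.
print(surplus_from_trade(10, 6, 8))  # (2, 2)
```

Wealth is created on both sides of the transaction, which is why observed, voluntary use is itself evidence of value.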

Big Tech companies are run by entrepreneurs who must ultimately answer to consumers. In a market economy, profits are a signal that entrepreneurs have successfully brought value to society. But they are also a signal to potential competitors. If Big Tech companies don’t continue to serve the interests of their consumers, they risk losing them to competitors.

Big Tech’s CEOs seem to get this. For instance, Jeff Bezos’ written testimony emphasized the importance of continual innovation at Amazon as a reason for its success:

Since our founding, we have strived to maintain a “Day One” mentality at the company. By that I mean approaching everything we do with the energy and entrepreneurial spirit of Day One. Even though Amazon is a large company, I have always believed that if we commit ourselves to maintaining a Day One mentality as a critical part of our DNA, we can have both the scope and capabilities of a large company and the spirit and heart of a small one. 

In my view, obsessive customer focus is by far the best way to achieve and maintain Day One vitality. Why? Because customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and a constant desire to delight customers drives us to constantly invent on their behalf. As a result, by focusing obsessively on customers, we are internally driven to improve our services, add benefits and features, invent new products, lower prices, and speed up shipping times—before we have to. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it. And I could give you many such examples. Not every business takes this customer-first approach, but we do, and it’s our greatest strength.

The economics of multi-sided platforms: How Big Tech does it

Economically speaking, Big Tech companies are (mostly) multi-sided platforms. Multi-sided platforms differ from regular firms in that they have to serve two or more distinct types of customers to generate demand from any of them.

Economist David Evans, who has done as much as any to help us understand multi-sided platforms, has identified three different types:

  1. Market-Makers enable members of distinct groups to transact with each other. Each member of a group values the service more highly if there are more members of the other group, thereby increasing the likelihood of a match and reducing the time it takes to find an acceptable match. (Amazon and Apple’s App Store)
  2. Audience-Makers match advertisers to audiences. Advertisers value a service more if there are more members of an audience who will react positively to their messages; audiences value a service more if there is more useful “content” provided by audience-makers. (Google, especially through YouTube, and Facebook, especially through Instagram)
  3. Demand-Coordinators make goods and services that generate indirect network effects across two or more groups. These platforms do not strictly sell “transactions” like a market maker or “messages” like an audience-maker; they are a residual category much like irregular verbs – numerous, heterogeneous, and important. Software platforms such as Windows and the Palm OS, payment systems such as credit cards, and mobile telephones are demand coordinators. (Android, iOS)

In order to bring value, Big Tech has to consider consumers on all sides of the platforms they operate. Sometimes, this means consumers on one side of the platform subsidize those on the other.

For instance, Google doesn’t charge its users to use its search engine, YouTube, or Gmail. Instead, companies pay Google to advertise to those users. Similarly, Facebook doesn’t charge the users of its social network; advertisers on the other side of the platform subsidize them.
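This cross-subsidy can be sketched with a stylized model (every number below is a hypothetical assumption, not an estimate of any real platform): when advertisers’ willingness to pay per user is high enough relative to users’ price sensitivity, the platform’s profit-maximizing user price is zero.

```python
def platform_profit(user_price, ad_revenue_per_user=12.0):
    """Stylized profit for a two-sided platform choosing a user price.

    Users join according to a simple linear demand curve; each user the
    platform attracts also generates advertising revenue on the other
    side. Total profit therefore rewards keeping the user base large,
    even at a user price of zero.
    """
    users = max(0.0, 100.0 - 10.0 * user_price)  # hypothetical user demand
    return (user_price + ad_revenue_per_user) * users

# Grid-search candidate user prices from $0.00 to $10.00: with
# advertisers contributing $12 per user, the best non-negative user
# price is $0 -- the ad side fully funds the "free" service.
prices = [p / 2 for p in range(0, 21)]
best = max(prices, key=platform_profit)
print(best, platform_profit(best))  # 0.0 1200.0
```

The same structure, with different parameters, can instead favor charging both sides, which is why pricing patterns vary across platforms.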

As their competitors and critics love to point out, there are some complications in that some platforms also compete in the markets they create. For instance, Apple does place its own apps in its App Store, and Amazon does engage in some first-party sales on its platform. But generally speaking, both Apple and Amazon act as matchmakers for exchanges between users and third parties.

The difficulty for multi-sided platforms is that they need to balance the interests of each part of the platform in a way that maximizes its value. 

Google and Facebook need to balance the interests of users and advertisers. For each, this means a free service for users that is subsidized by the advertisers. But the advertisers gain a lot of value by tailoring ads based upon search history, browsing history, and likes and shares. Apple and Amazon need to create platforms that are valuable for buyers and sellers, and to balance how much first-party competition they allow before they lose the benefits of third-party sales.

There are no easy answers to creating a search engine, a video service, a social network, an app store, or an online marketplace. Everything from moderation practices, to pricing on each side of the platform, to the degree of competition from the platform operators themselves needs to be balanced right, or these platforms will lose participants on one side or the other to competitors.

Conclusion

Representative Raskin’s “cyber barons” were dragged through the mud by Congress. But much like the falsely identified robber barons of the 19th century who were truly market entrepreneurs, the Big Tech companies of today are wrongfully maligned.

No one is forcing consumers to use these platforms. The incredible benefits they have brought to society through market processes show they are not robbing anyone. Instead, they are constantly innovating and attempting to strike a balance between consumers on each side of their platforms.

The myth of the cyber barons need not live on any longer than last week’s farcical antitrust hearing.

Congress needs help understanding the fast-moving world of technology. That help is not going to come from reviving the Office of Technology Assessment (“OTA”), however. The OTA is an idea for another age, while the tweaks necessary to shore up the existing technology resources available to Congress are relatively modest.

Although a new OTA is unlikely to be harmful, it would entail the expenditure of additional resources, including the political capital necessary to create a new federal agency, along with all the revolving-door implications that entails. 

The real problem with reviving the OTA is that it distracts Congress from recognizing that it needs to be more than merely well-informed. What we need is both smarter regulation and regulation better tailored to 21st-century technology and the economy. A new OTA might help with the former problem, but it may in fact only exacerbate the latter.

The OTA is a poor fit for the modern world

The OTA began existence in 1972, with a mission to provide science and technology advice to Congress. It was closed in 1995, following budget cuts. Lately, some well-meaning folks — including even some presidential hopefuls — have sought to revive the OTA.

To the extent that something like the OTA would be salutary today, it would be as a check on incorrect technological and scientific assumptions contained in proposed legislation. For example, in the 1990s the OTA provided useful technical information to Congress about how encryption technologies worked as it was considering legislation such as CALEA.

Yet there is good reason to believe that a new legislative-branch agency would not outperform the alternatives available today for these functions. A recent study from the National Academy of Public Administration (“NAPA”), undertaken at the request of Congress and the Congressional Research Service, summarized the OTA’s poor fit for today’s legislative process.

A new OTA “would have similar vulnerabilities that led to the dis-establishment of the [original] OTA.” While a new OTA could provide some information and services to Congress, “such services are not essential for legislators to actually craft legislation, because Congress has multiple sources for [Science and Technology] information/analysis already and can move legislation forward without a new agency.” Moreover, according to interviewed legislative branch personnel, the original OTA’s reports “were not critical parts of the legislative deliberation and decision-making processes during its existence.”

The upshot?

A new [OTA] conducting helpful but not essential work would struggle to integrate into the day-to-day legislative activities of Congress, and thus could result in questions of relevancy and leave it potentially vulnerable to political challenges.

The NAPA report found that the Congressional Research Service (“CRS”) and the Government Accountability Office (“GAO”) already contained most of the resources that Congress needed. The report recommended enhancing those existing resources, and the creation of a science and technology coordinator position in Congress in order to facilitate the hiring of appropriate personnel for committees, among other duties. 

The one gap identified by the NAPA report is that Congress currently has no “horizon scanning” capability to look at emerging trends over the long term. This was an original function of the OTA.

According to Peter D. Blair, in his book Congress’s Own Think Tank – Learning from the Legacy of the Office of Technology Assessment, an original intention of the OTA was to “provide an ‘early warning’ on the potential impacts of new technology.” (p. 43). But over time, the agency, facing the bureaucratic incentive to avoid political controversy, altered its behavior and became carefully “responsive[] to congressional needs” (p. 51) — which is a polite way of saying that the OTA’s staff came to see their purpose as providing justification for Congress to enact desired legislation and to avoid raising concerns that could be an impediment to that legislation. The bureaucratic pressures facing the agency forced a mission drift that would be highly likely to recur in a new OTA.

The NAPA report, however, has its own recommendation that does not involve the OTA: allow the newly created science and technology coordinator to create annual horizon-scanning reports. 

A new OTA unnecessarily increases the surface area for regulatory capture

Apart from the likelihood that a new OTA would be a mere redundancy, it presents yet another vector for regulatory capture (or at least endless accusations of regulatory capture used to undermine its work). Andrew Yang inadvertently points to this fact on his campaign page that calls for a revival of the OTA:

This vital institution needs to be revived, with a budget large enough and rules flexible enough to draw top talent away from the very lucrative private sector.

Yang’s wishcasting aside, there is just no way that you are going to create an institution with a “budget large enough and rules flexible enough” to permanently siphon off top-tier talent from multi-billion-dollar firms working on cutting-edge technologies. What you will do is create an interesting, temporary post-graduate school or mid-career stop-over point where top-tier talent can cycle in and out of those top firms. These are highly intelligent, very motivated individuals who want to spend their careers making stuff, not writing research reports for Congress.

The same experts who are high-level enough to work at the OTA will be similarly employable by large technology and scientific firms. The revolving door is all but inevitable.

The real problem to solve is a lack of modern governance

Lack of adequate information per se is not the real problem facing members of Congress today. The real problem is that, for the most part, legislators neither understand nor seem to care about how best to govern and establish regulatory frameworks for new technology. As a result, Congress passes laws that threaten to slow down the progress of technological development, thus harming consumers while protecting incumbents. 

Assuming for the moment that a new OTA could provide some kind of horizon-scanning capability, it would necessarily fail, even on these terms. By the time Congress is sufficiently alarmed by a new or latent “problem” (or at least a politically relevant feature) of technology, the industry or product under examination has most likely already progressed far enough in its development that it’s far too late for Congress to do anything useful. Even though the NAPA report’s authors seem to believe that a “horizon scanning” capability will help, in a dynamic economy, truly predicting the technology that will impact society seems a bit like trying to predict the weather on a particular day a year hence.

Further, the limits of human cognition restrict the utility of “more information” to the legislative process. Will Rinehart discussed this quite ably, pointing to the psychological literature indicating that, in many cases involving technical subjects, more information given to legislators only makes them overconfident. That is to say, they can cite more facts but put fewer of them to good use when writing laws.

The truth is, no degree of expertise will ever again provide an adequate basis for producing prescriptive legislation meant to guide an industry or market segment. The world is simply moving too fast.

It would be far more useful for Congress to explore legislation that encourages the firms involved in highly dynamic industries to develop and enforce voluntary standards that emerge as community standards. See, for example, the observation offered by Jane K. Winn in her paper on information governance and privacy law that

[i]n an era where the ability to compete effectively in global markets increasingly depends on the advantages of extracting actionable insights from petabytes of unstructured data, the bureaucratic individual control right model puts a straightjacket on product innovation and erects barriers to fostering a culture of compliance.

Winn is thinking about what a “governance” response to privacy and crises like the Cambridge Analytica scandal should be, and posits those possibilities against the top-down response of the EU with its General Data Protection Regulation (“GDPR”). She notes that preliminary research on the GDPR suggests that framing privacy legislation as bureaucratic control over firms using consumer data can have the effect of removing all of the risk-management features that the private sector is good at developing.

Instead of pursuing legislative agendas that imagine the state as the all-seeing eye at the top of a command-and-control legislative pyramid, lawmakers should seek to enable those with relevant functional knowledge to employ that knowledge for good governance, broadly understood:

Reframing the information privacy law reform debate as the process of constructing new information governance institutions builds on decades of American experience with sector-specific, risk based information privacy laws and more than a century of American experience with voluntary, consensus standard-setting processes organized by the private sector. The turn to a broader notion of information governance reflects a shift away from command-and-control strategies and toward strategies for public-private collaboration working to protect individual, institutional and social interests in the creation and use of information.

The implications for a new OTA are clear. The model of “gather all relevant information on a technical subject to help construct a governing code” was best suited, if it ever worked at all, to a world that moved at an industrial-era pace. Today, governance structures need to be much more flexible, and the work of an OTA — even if Congress didn’t already have most of its advisory bases covered — has little relevance.

The engineers working at firms developing next-generation technologies are the individuals with the most relevant, timely knowledge. A forward-looking view of regulation would try to develop a means for the information these engineers have to surface and become an ongoing part of governing standards.

*note – This post originally said that OTA began “operating” in 1972. I meant to say it began “existence” in 1972. I have corrected the error.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest,

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’s disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps that I didn’t address in the original piece. The first relates to expert bias, and the second concerns office organization.

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than it would have by randomly choosing outcomes. In the technical parlance, this means expert opinions were not calibrated: there wasn’t a correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events experts deemed impossible occurred with some regularity. In a number of fields, these supposedly impossible events occurred as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”
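Calibration has a precise meaning here: among all events a forecaster assigns, say, a 90 percent probability, roughly 90 percent should actually occur. A minimal sketch of the check, using made-up forecast data:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Compare stated probabilities with observed frequencies.

    `forecasts` is a list of (stated_probability, outcome) pairs,
    where outcome is 1 if the event occurred and 0 if it did not.
    Returns a dict mapping each stated probability to the fraction
    of those events that actually happened.
    """
    buckets = defaultdict(list)
    for prob, outcome in forecasts:
        buckets[prob].append(outcome)
    return {p: sum(o) / len(o) for p, o in buckets.items()}

# A hypothetical expert calls five events "90% certain" and only one
# occurs: stated 0.9 vs. observed 0.2 -- badly miscalibrated.
data = [(0.9, 1), (0.9, 0), (0.9, 0), (0.9, 0), (0.9, 0)]
print(calibration_table(data))  # {0.9: 0.2}
```

A well-calibrated forecaster would show observed frequencies close to the stated probabilities in every bucket; Tetlock’s experts, in aggregate, did not.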

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to have overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,   

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature leads to two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and more risky political ventures within the law.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, there is a diminishing marginal predictive return to knowledge. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. The coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event,

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believe the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Towards the end of the piece, Binder laid bare his concern, which is shared by many,

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and on videos online. Over time, external communication has risen to a prominent role in congressional offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard-and-fast conclusions, it could help explain why deepening tech expertise hasn’t been a winning legislative issue. The demand just isn’t there. And given the priorities offices actually display, more expertise might not yield any benefits, while also giving offices potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

Gus Hurwitz is Assistant Professor of Law at University of Nebraska College of Law

Administrative law really is a strange beast. My last post explained this a bit, in the context of Chevron. In this post, I want to make this point in another context, explaining how utterly useless a policy statement can be. Our discussion today has focused on what should go into a policy statement – there seems to be general consensus that one is a good idea. But I’m not sure that we have a good understanding of how little certainty a policy statement offers.

Administrative Stare Decisis?

I alluded in my previous post to the absence of stare decisis in the administrative context. This is one of the greatest differences between judicial and administrative rulemaking: agencies are not bound by either prior judicial interpretations of their statutes, or even by their own prior interpretations. These conclusions follow from relatively recent opinions – Brand X in 2005 and Fox I in 2009 – and have broad implications for the relationship between courts and agencies.

In Brand-X, the Court explained that a “court’s prior judicial construction of a statute trumps an agency construction otherwise entitled to Chevron deference only if the prior court decision holds that its construction follows from the unambiguous terms of the statute and thus leaves no room for agency discretion.” This conclusion follows from a direct application of Chevron: courts are responsible for determining whether a statute is ambiguous; agencies are responsible for determining the (reasonable) meaning of a statute that is ambiguous.

Not only are agencies not bound by a court’s prior interpretations of an ambiguous statute – they’re not even bound by their own prior interpretations!

In Fox I, the Court held that an agency’s own interpretation of an ambiguous statute imposes no special obligations should the agency subsequently change its interpretation.[1] It may be necessary to acknowledge the prior policy; and factual findings upon which the new policy is based that contradict findings upon which the prior policy was based may need to be explained.[2] But where a statute may be interpreted in multiple ways – that is, in any case where the statute is ambiguous – Congress, and by extension its agencies, is free to choose between those alternative interpretations. The fact that an agency previously adopted one interpretation does not necessarily render other possible interpretations any less reasonable; the mere fact that one was previously adopted therefore, on its own, cannot act as a bar to subsequent adoption of a competing interpretation.

What Does This Mean for Policy Statements?

In a contentious policy environment – that is, one where the prevailing understanding of an ambiguous law changes with the consensus of a three-Commissioner majority – policy statements are worth next to nothing. Generally, the value of a policy statement is explaining to a court the agency’s rationale for its preferred construction of an ambiguous statute. Absent such an explanation, a court is likely to find that the construction was not sufficiently reasoned to merit deference. That is: a policy statement makes it easier for an agency to assert a given construction of a statute in litigation.

But a policy statement isn’t necessary to make that assertion, or for an agency to receive deference. Absent a policy statement, the agency needs to demonstrate to the court that its interpretation of the statute is sufficiently reasoned (and not merely a strategic interpretation adopted for the purposes of the present litigation).

And, more important, a policy statement in no way prevents an agency from changing its interpretation. Fox I makes clear that an agency is free to change its interpretations of a given statute. Prior interpretations – including prior policy statements – are not a bar to such changes. Prior interpretations also, therefore, offer little assurance to parties subject to any given interpretation.

Are Policy Statements Entirely Useless?

Policy statements may not be entirely useless. The likely front on which to challenge an unexpected change in an agency’s interpretation of its statute is on Due Process or Notice grounds. The existence of a policy statement may make it easier for a party to argue that a changed interpretation runs afoul of Due Process or Notice requirements. See, e.g., Fox II.

So there is some hope that a policy statement would be useful. But, in the context of Section 5 UMC claims, I’m not sure how much comfort this really affords. Regulatory takings jurisprudence gives agencies broad power to seemingly contravene Due Process and Notice expectations. This is largely because of the nature of relief available to the FTC: injunctive relief, such as barring certain business practices, even if it results in real economic losses, is likely to survive a regulatory takings challenge, and therefore also a Due Process challenge. Generally, the Due Process and Notice lines of argument are best suited against fines and similar retrospective remedies; they offer little comfort against prospective remedies like injunctions.

Conclusion

I’ll conclude the same way that I did my previous post, with what I believe is the most important takeaway from this post: however we proceed, we must do so with an understanding of both antitrust and administrative law. Administrative law is the unique, beautiful, and scary beast that governs the FTC – those who fail to respect its nuances do so at their own peril.


[1] FCC v. Fox Television Stations, Inc., 556 U.S. 502, 514–516 (2009) (“The statute makes no distinction [] between initial agency action and subsequent agency action undoing or revising that action. … And of course the agency must show that there are good reasons for the new policy. But it need not demonstrate to a court’s satisfaction that the reasons for the new policy are better than the reasons for the old one; it suffices that the new policy is permissible under the statute, that there are good reasons for it, and that the agency believes it to be better, which the conscious change of course adequately indicates.”).

[2] Id. (“To be sure, the requirement that an agency provide reasoned explanation for its action would ordinarily demand that it display awareness that it is changing position. … This means that the agency need not always provide a more detailed justification than what would suffice for a new policy created on a blank slate. Sometimes it must—when, for example, its new policy rests upon factual findings that contradict those which underlay its prior policy; or when its prior policy has engendered serious reliance interests that must be taken into account. It would be arbitrary or capricious to ignore such matters. In such cases it is not that further justification is demanded by the mere fact of policy change; but that a reasoned explanation is needed for disregarding facts and circumstances that underlay or were engendered by the prior policy.”).