Archives For Congress

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement being false did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official), with receiving a benefit (fraud), or with harming someone’s reputation (defamation), the First Amendment does not sanction penalties for false speech in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech. 

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government would not only fail to be the most narrowly tailored way to deal with such speech, but would also be bound to have chilling effects even on true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to treat sponsored content as speech made by the platform itself, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And it does. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires takedown, by court order, of speech after it has been declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

One of the key recommendations of the House Judiciary Committee’s antitrust report, which seems to have bipartisan support (see Rep. Buck’s report), is shifting evidentiary burdens of proof to defendants with “monopoly power.” These recommended changes are aimed at helping antitrust enforcers and private plaintiffs “win” more. The result may well be more convictions, more jury verdicts, more consent decrees, and more settlements, but there is a cost.

Presumption of illegality for certain classes of defendants unless they can prove otherwise is inconsistent with the American traditions of the presumption of innocence and of allowing persons to dispose of their property as they wish. Forcing antitrust defendants to defend themselves against what is effectively a presumption of guilt will create an enormous burden upon them. But the effects will be felt far beyond antitrust defendants themselves: consumers who would have benefited from deterred mergers or prevented business conduct will forgo those benefits.

The Presumption of Liberty in American Law

The Presumption of Innocence

There is nothing wrong with presumptions in law as a general matter. For instance, one of the most important presumptions in American law is that criminal defendants are presumed innocent until proven guilty. Prosecutors bear the burden of proof, and must prove guilt beyond a reasonable doubt. Even in the civil context, plaintiffs, whether public or private, have the burden of proving a violation of the law, by the preponderance of the evidence. In either case, the defendant is not required to prove they didn’t violate the law.

Fundamentally, the presumption of innocence is about liberty. As William Blackstone put it in his Commentaries on the Laws of England centuries ago: “the law holds that it is better that ten guilty persons escape than that one innocent suffer.”

In economic terms, society must balance the need to deter bad conduct, however defined, with not deterring good conduct. In a world of uncertainty, this includes the possibility that decision-makers will get it wrong. For instance, if a mere allegation of wrongdoing places the burden upon a defendant to prove his or her innocence, much good conduct would be deterred out of fear of false allegations. In this sense, the presumption of innocence is important: it protects the innocent from allegations of wrongdoing, even if that means in some cases the guilty escape judgment.

Presumptions in Property, Contract, and Corporate Law

Similarly, presumptions in other areas of law protect liberty and weigh against deterring good conduct in the name of preventing bad. For instance, the presumption when it comes to how people dispose of their property is that, unless a law says otherwise, they may do as they wish. In other words, there is no presumption that a person may not use their property in the manner they wish. The presumption is liberty, unless a valid law proscribes the behavior. The exceptions to this rule typically deal with situations where a use of property could harm someone else.

In contracts, the right of persons to come to a mutual agreement is the general rule, with rare exceptions. The presumption is in favor of enforcing voluntary agreements. Default rules in the absence of complete contracting supplement these agreements, but even the default rules can be contracted around in most cases.

Bringing the two together, corporate law—essentially the nexus of contract law and property law—allows persons to come together to dispose of property and make contracts, supplying default rules that can be contracted around. The presumption again is that people are free to do as they choose with their own property. The default is never that people can’t create firms to buy or sell or make agreements.

A corollary right of the above is that people may start businesses and deal with others on whatever basis they choose, unless a generally applicable law says otherwise. In fact, they can even buy other businesses. Mergers and acquisitions are generally allowed by the law. 

Presumptions in Antitrust Law

Antitrust is a generally applicable set of laws that limit how people can use their property. But even there, the presumption is not that every merger or act by a large company is harmful.

On the contrary, antitrust laws allow groups of people to dispose of property as they wish unless it can be shown that a firm has “market power” that is likely to be exercised to the detriment of competition or consumers. Plaintiffs, whether public or private, bear the burden of proving all the elements of the antitrust violation alleged.

In particular, antitrust law has incorporated the error cost framework, which considers the cost of getting decisions wrong. Much as the presumption of innocence rests on the tradeoff of allowing some guilty persons to go unpunished in order to protect the innocent, the error cost framework accepts that some anticompetitive conduct will go unpunished in order to protect procompetitive conduct. American antitrust law seeks to avoid the condemnation of procompetitive conduct more than it avoids allowing the guilty to escape condemnation.
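The tradeoff at the heart of the error cost framework can be made concrete with a toy expected-cost calculation. The sketch below is purely illustrative: every probability and cost figure is an invented assumption, not an estimate from the antitrust literature.

```python
# Illustrative error-cost comparison of two hypothetical enforcement rules.
# All probabilities and costs are made-up numbers for demonstration only.

def expected_error_cost(p_false_positive, p_false_negative, cost_fp, cost_fn):
    """Expected social cost of wrongly condemning good conduct (false
    positives) plus wrongly clearing bad conduct (false negatives)."""
    return p_false_positive * cost_fp + p_false_negative * cost_fn

# A lenient rule: rarely condemns, so more bad conduct slips through.
lenient = expected_error_cost(p_false_positive=0.05, p_false_negative=0.30,
                              cost_fp=100, cost_fn=40)

# A strict rule: catches more bad conduct but condemns more good conduct.
strict = expected_error_cost(p_false_positive=0.25, p_false_negative=0.10,
                             cost_fp=100, cost_fn=40)

# If false positives are the costlier error (e.g., because they chill
# procompetitive conduct), the lenient rule minimizes expected cost.
print(lenient)  # 0.05*100 + 0.30*40 = 17.0
print(strict)   # 0.25*100 + 0.10*40 = 29.0
```

The point of the exercise is only that which rule is "better" depends entirely on the relative costs of the two error types, which is exactly the judgment the framework asks courts to make.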

For instance, to prove a merger or acquisition would violate the antitrust laws, a plaintiff must show the transaction will substantially lessen competition. This involves defining the relevant market, showing that the defendant has power over that market, and demonstrating that the transaction would lessen competition. While concentration of the market is an important part of the analysis, antitrust law must consider the effect on consumer welfare as a whole. The law doesn’t simply condemn mergers or acquisitions by large companies just because they are large.

Similarly, to prove a monopolization claim, a plaintiff must establish the defendant has “monopoly power” in the relevant market. But monopoly power isn’t enough. As stated by the Supreme Court in Trinko:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period— is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth. To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

The plaintiff must also prove the defendant has engaged in the “willful acquisition or maintenance of [market] power, as distinguished from growth or development as a consequence of a superior product, business acumen, or historical accident.” Antitrust law is careful to avoid mistaken inferences and false condemnations, which are especially costly because they “chill the very conduct antitrust laws are designed to protect.”

The presumption isn’t against mergers or business conduct even when those businesses are large. Antitrust law only condemns mergers or business conduct when it is likely to harm consumers.

How Changing Antitrust Presumptions Will Harm Society

In light of all of this, the House Judiciary Committee’s Investigation of Competition in Digital Markets proposes some pretty radical departures from the law’s normal presumption in favor of people disposing of property how they choose. Unfortunately, the minority report issued by Rep. Buck agrees with the recommendations to shift burdens onto antitrust defendants in certain cases.

One of the recommendations from the Subcommittee is that Congress:

codify[] bright-line rules for merger enforcement, including structural presumptions. Under a structural presumption, mergers resulting in a single firm controlling an outsized market share, or resulting in a significant increase in concentration, would be presumptively prohibited under Section 7 of the Clayton Act. This structural presumption would place the burden of proof upon the merging parties to show that the merger would not reduce competition. A showing that the merger would result in efficiencies should not be sufficient to overcome the presumption that it is anticompetitive. It is the view of Subcommittee staff that the 30% threshold established by the Supreme Court in Philadelphia National Bank is appropriate, although a lower standard for monopsony or buyer power claims may deserve consideration by the Subcommittee. By shifting the burden of proof to the merging parties in cases involving concentrated markets and high market shares, codifying the structural presumption would help promote the efficient allocation of agency resources and increase the likelihood that anticompetitive mergers are blocked. [EMPHASIS ADDED]

Under this proposal, in cases where concentration meets an arbitrary benchmark based upon the market definition, the presumption will be that the merger is illegal. Defendants will now bear the burden of proof to show the merger won’t reduce competition, without even getting to refer to efficiencies that could benefit consumers. 
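The mechanics of such a bright-line presumption can be sketched in a few lines. The 30% figure below tracks the report’s reference to Philadelphia National Bank; the function name and the market shares are hypothetical, invented purely for illustration:

```python
# Toy model of the proposed structural presumption: a merger is
# presumptively illegal once the combined firm's share crosses a line.
# Market shares are hypothetical; 30% is the threshold the report cites.

PRESUMPTION_THRESHOLD = 0.30

def presumptively_prohibited(share_a, share_b,
                             threshold=PRESUMPTION_THRESHOLD):
    """Return True if the merged firm's combined share trips the
    structural presumption, shifting the burden to the merging parties."""
    return share_a + share_b > threshold

# Two mid-sized firms whose combined share crosses 30%: burden shifts
# to the defendants, and under the proposal efficiencies can't rebut it.
print(presumptively_prohibited(0.18, 0.15))  # True  (33% combined)

# A smaller deal stays under the line and is analyzed conventionally.
print(presumptively_prohibited(0.10, 0.08))  # False (18% combined)
```

Note how the rule turns entirely on the share arithmetic: nothing in the check asks whether the transaction actually harms consumers, which is precisely the objection raised above.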

Changing the burden of proof to be against criminal defendants would lead to more convictions of guilty people, but it would also lead to a lot more false convictions of innocent defendants. Similarly, changing the burden of proof to be against antitrust defendants would certainly lead to more condemnations of anticompetitive mergers, but it would also lead to the deterrence of a significant portion of procompetitive mergers.

So yes, if adopted, plaintiffs would likely win more as a result of these proposed changes, including in cases where mergers are anticompetitive. But this does not necessarily mean it would be to the benefit of larger society. 

Antitrust law has evolved over time to recognize that concentration alone is not predictive of likely competitive harm in merger analysis. Both the horizontal merger guidelines and the vertical merger guidelines issued by the FTC and DOJ emphasize the importance of fact-specific inquiries into competitive effects, not just reliance on concentration statistics. This reflects a long-standing bipartisan consensus. The HJC majority report would overturn this consensus by suggesting a return to the structural presumptions that have largely been rejected in antitrust law.

The HJC majority report also calls for changes in presumptions when it comes to monopolization claims. For instance, the report calls on Congress to consider creating a statutory presumption of dominance by a seller with a market share of 30% or more and a presumption of dominance by a buyer with a market share of 25% or more. The report then goes on to suggest overturning a number of precedents dealing with monopolization claims which in their view restricted claims of tying, predatory pricing, refusals to deal, leveraging, and self-preferencing. In particular, they call on Congress to “[c]larify[] that ‘false positives’ (or erroneous enforcement) are not more costly than ‘false negatives’ (erroneous non-enforcement), and that, when relating to conduct or mergers involving dominant firms, ‘false negatives’ are costlier.”

This again turns the ordinary presumptions of innocence and of allowing people to dispose of their property as they see fit completely on their head. If adopted, these changes would mean that defendants largely have to prove their innocence in monopolization cases whenever their shares of the market exceed a certain threshold.

Moreover, the report calls for Congress to consider making conduct illegal even if it “can be justified as an improvement for consumers.” It is highly likely that the changes proposed will harm consumer welfare in many cases, as the focus changes from economic efficiency to concentration. 

Conclusion

The HJC report’s recommendations on changing antitrust presumptions should be rejected. The harms will be felt not only by antitrust defendants, who will be much more likely to lose regardless of whether they have violated the law, but by consumers whose welfare is no longer the focus. The result is inconsistent with the American tradition that presumes innocence and the ability of people to dispose of their property as they see fit. 

During last week’s antitrust hearing, Representative Jamie Raskin (D-Md.) provided a sound bite that served as a salvo: “In the 19th century we had the robber barons, in the 21st century we get the cyber barons.” But with sound bites, much like bumper stickers, there’s no room for nuance or scrutiny.

The news media has extensively covered the “questioning” of the CEOs of Facebook, Google, Apple, and Amazon (collectively “Big Tech”). Of course, most of this questioning was actually political posturing with little regard for the actual answers or antitrust law. But just like with the so-called robber barons, the story of Big Tech is much more interesting and complex. 

The myth of the robber barons: Market entrepreneurs vs. political entrepreneurs

The Robber Barons: The Great American Capitalists, 1861–1901 (1934), by Matthew Josephson, was written in the midst of America’s Great Depression. Josephson, a Marxist with sympathies for the Soviet Union, made the case that the 19th-century titans of industry were made rich on the backs of the poor during the industrial revolution. The idea that the rich are wealthy because they robbed the rest of us has long outlived Josephson and Marx, down to the present day, as exemplified by the writings of Matt Stoller and the politics of the House Judiciary Committee.

In his The Myth of the Robber Barons, Burton Folsom, Jr. makes the case that much of the received wisdom about the great 19th-century businessmen is wrong. He distinguishes between market entrepreneurs, who generated wealth by selling newer, better, or less expensive products on the free market without any government subsidies, and political entrepreneurs, who became rich primarily by influencing the government to subsidize their businesses or to enact legislation and regulation that harmed their competitors.

Folsom narrates the stories of market entrepreneurs, like Thomas Gibbons & Cornelius Vanderbilt (steamships), James J. Hill (railroads), the Scranton brothers (iron rails), Andrew Carnegie & Charles Schwab (steel), and John D. Rockefeller (oil), who created immense value for consumers by drastically reducing the prices of the goods and services their companies provided. Yes, these men got rich. But the value society received was arguably even greater. Wealth was created because market exchange is a positive-sum game.

On the other hand, the political entrepreneurs, like Robert Fulton & Edward Collins (steamships) and Leland Stanford & Henry Villard (railroads), drained societal resources by using taxpayer money to create inefficient monopolies. Because their favored position shielded them from market discipline, cutting costs and prices mattered less to them than to the market entrepreneurs. Their wealth came at the expense of the rest of society, because political exchange is a zero-sum game.

Big Tech makes society better off

Today’s titans of industry, i.e. Big Tech, have created enormous value for society. This is almost impossible to deny, though some try. From zero-priced search on Google, to the convenience and price of products on Amazon, to the nominally free social network(s) of Facebook, to the plethora of options in Apple’s App Store, consumers have greatly benefited from Big Tech. Consumers flock to use Google, Facebook, Amazon, and Apple for a reason: they believe they are getting a great deal. 

By and large, the techlash comes from “intellectuals” who think they know better than consumers acting in the marketplace what is good for them. And as noted by Alec Stapp, Americans in opinion polls consistently put a great deal of trust in Big Tech, at least compared to government institutions.

One of the basic building blocks of economics is that both parties benefit from a voluntary exchange ex ante, or else they would not be willing to engage in it. The fact that consumers use Big Tech to the extent they do is overwhelming evidence of its value. Obfuscations like “market power” mislead more than they inform. In the absence of governmental barriers to entry, consumers voluntarily choosing Big Tech does not mean these companies have power; it means they provide a great service.

Big Tech companies are run by entrepreneurs who must ultimately answer to consumers. In a market economy, profits are a signal that entrepreneurs have successfully brought value to society. But they are also a signal to potential competitors. If Big Tech companies don’t continue to serve the interests of their consumers, they risk losing them to competitors.

Big Tech’s CEOs seem to get this. For instance, Jeff Bezos’ written testimony emphasized the importance of continual innovation at Amazon as a reason for its success:

Since our founding, we have strived to maintain a “Day One” mentality at the company. By that I mean approaching everything we do with the energy and entrepreneurial spirit of Day One. Even though Amazon is a large company, I have always believed that if we commit ourselves to maintaining a Day One mentality as a critical part of our DNA, we can have both the scope and capabilities of a large company and the spirit and heart of a small one. 

In my view, obsessive customer focus is by far the best way to achieve and maintain Day One vitality. Why? Because customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and a constant desire to delight customers drives us to constantly invent on their behalf. As a result, by focusing obsessively on customers, we are internally driven to improve our services, add benefits and features, invent new products, lower prices, and speed up shipping times—before we have to. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it. And I could give you many such examples. Not every business takes this customer-first approach, but we do, and it’s our greatest strength.

The economics of multi-sided platforms: How Big Tech does it

Economically speaking, Big Tech companies are (mostly) multi-sided platforms. Multi-sided platforms differ from regular firms in that they must serve two or more distinct types of consumers to generate demand from any of them.

Economist David Evans, who has done as much as any to help us understand multi-sided platforms, has identified three different types:

  1. Market-Makers enable members of distinct groups to transact with each other. Each member of a group values the service more highly if there are more members of the other group, thereby increasing the likelihood of a match and reducing the time it takes to find an acceptable match. (Amazon and Apple’s App Store)
  2. Audience-Makers match advertisers to audiences. Advertisers value a service more if there are more members of an audience who will react positively to their messages; audiences value a service more if there is more useful “content” provided by audience-makers. (Google, especially through YouTube, and Facebook, especially through Instagram)
  3. Demand-Coordinators make goods and services that generate indirect network effects across two or more groups. These platforms do not strictly sell “transactions” like a market maker or “messages” like an audience-maker; they are a residual category much like irregular verbs – numerous, heterogeneous, and important. Software platforms such as Windows and the Palm OS, payment systems such as credit cards, and mobile telephones are demand coordinators. (Android, iOS)
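The cross-group network effect running through all three of Evans’ categories can be seen in a toy model. The sketch below is purely illustrative, with hypothetical numbers: assume each pairing between a member of one group and a member of the other generates a fixed surplus, so each side’s value grows with the size of the other side.

```python
# Toy illustration of indirect network effects on a two-sided platform.
# All parameters are hypothetical, chosen only to make the dynamic visible.

def platform_value(buyers, sellers, match_value=1.0):
    """Total surplus when each buyer's value grows with the number of
    sellers (and vice versa): a simple cross-group network effect."""
    return match_value * buyers * sellers  # each potential pairing adds surplus

# Pooling both sides on one platform yields more total value than
# splitting the same participants across two smaller platforms.
combined = platform_value(buyers=100, sellers=100)
split = 2 * platform_value(buyers=50, sellers=50)
print(combined, split)  # 10000.0 5000.0
```

The superlinear growth in total value is why a platform that attracts both sides tends to pull ahead of smaller rivals, and why entrants face a chicken-and-egg problem in competing with an incumbent.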

In order to bring value, Big Tech has to consider consumers on all sides of the platforms they operate. Sometimes, this means consumers on one side of the platform subsidize those on the other.

For instance, Google doesn’t charge its users to use its search engine, YouTube, or Gmail. Instead, companies pay Google to advertise to their users. Similarly, Facebook doesn’t charge the users of its social network; advertisers on the other side of the platform subsidize them.

As their competitors and critics love to point out, there are some complications in that some platforms also compete in the markets they create. For instance, Apple does place its own apps in its App Store, and Amazon does engage in some first-party sales on its platform. But generally speaking, both Apple and Amazon act as matchmakers for exchanges between users and third parties.

The difficulty for multi-sided platforms is that they need to balance the interests of each part of the platform in a way that maximizes its value. 

Google and Facebook need to balance the interests of users and advertisers. In each case, this means a free service for users that is subsidized by the advertisers. But advertisers gain a great deal of value by tailoring ads based on search history, browsing history, and likes and shares. Apple and Amazon need to create platforms that are valuable for both buyers and sellers, and they must balance how much first-party competition to allow before they lose the benefits of third-party sales.

There are no easy answers to building a search engine, a video service, a social network, an app store, or an online marketplace. Everything from moderation practices, to pricing on each side of the platform, to the degree of competition from the platform operators themselves must be balanced correctly, or these platforms will lose participants on one side or the other to competitors.

Conclusion

Representative Raskin’s “cyber barons” were dragged through the mud by Congress. But much like the falsely labeled robber barons of the 19th century, who were in truth market entrepreneurs, the Big Tech companies of today are wrongfully maligned.

No one is forcing consumers to use these platforms. The incredible benefits they have brought to society through market processes show they are not robbing anyone. Instead, they are constantly innovating and attempting to strike the right balance among consumers on each side of their platforms.

The myth of the cyber barons need not live on any longer than last week’s farcical antitrust hearing.

Congress needs help understanding the fast-moving world of technology. That help is not going to come from reviving the Office of Technology Assessment (“OTA”), however. The OTA is an idea for another age, while the tweaks necessary to shore up the existing technology resources available to Congress are relatively modest.

Although a new OTA is unlikely to be harmful, it would entail the expenditure of additional resources, including the political capital necessary to create a new federal agency, along with all the revolving-door implications that come with one.

The real problem with reviving the OTA is that it distracts Congress from recognizing that it needs to be more than merely well-informed. What we need is both smarter regulation and regulation better tailored to 21st-century technology and the economy. A new OTA might help with the former problem, but it may only exacerbate the latter.

The OTA is a poor fit for the modern world

The OTA began existence in 1972, with a mission to provide science and technology advice to Congress. It was closed in 1995, following budget cuts. Lately, some well-meaning folks — including even some presidential hopefuls — have sought to revive it.

To the extent that something like the OTA would be salutary today, it would be as a check on incorrect technological and scientific assumptions contained in proposed legislation. For example, in the 1990s the OTA provided useful technical information to Congress about how encryption technologies worked as it considered legislation such as CALEA.

Yet there is good reason to believe that a new legislative-branch agency would not outperform the alternatives available today for these functions. A recent study from the National Academy of Public Administration (“NAPA”), undertaken at the request of Congress and the Congressional Research Service, summarized the OTA’s poor fit for today’s legislative process.

A new OTA “would have similar vulnerabilities that led to the dis-establishment of the [original] OTA.” While a new OTA could provide some information and services to Congress, “such services are not essential for legislators to actually craft legislation, because Congress has multiple sources for [Science and Technology] information/analysis already and can move legislation forward without a new agency.” Moreover, according to interviewed legislative branch personnel, the original OTA’s reports “were not critical parts of the legislative deliberation and decision-making processes during its existence.”

The upshot?

A new [OTA] conducting helpful but not essential work would struggle to integrate into the day-to-day legislative activities of Congress, and thus could result in questions of relevancy and leave it potentially vulnerable to political challenges.

The NAPA report found that the Congressional Research Service (“CRS”) and the Government Accountability Office (“GAO”) already house most of the resources Congress needs. The report recommended enhancing those existing resources and creating a science and technology coordinator position in Congress to facilitate the hiring of appropriate personnel for committees, among other duties.

The one gap identified by the NAPA report is that Congress currently has no “horizon scanning” capability to look at emerging trends over the long term. This was an original function of the OTA.

According to Peter D. Blair, in his book Congress’s Own Think Tank – Learning from the Legacy of the Office of Technology Assessment, an original intention of the OTA was to “provide an ‘early warning’ on the potential impacts of new technology.” (p. 43). But over time, the agency, facing the bureaucratic incentive to avoid political controversy, altered its behavior and became carefully “responsive[] to congressional needs” (p. 51) — which is a polite way of saying that the OTA’s staff came to see their purpose as providing justification for Congress to enact desired legislation and to avoid raising concerns that could be an impediment to that legislation. The bureaucratic pressures facing the agency forced a mission drift that would be highly likely to recur in a new OTA.

The NAPA report, however, has its own recommendation that does not involve the OTA: allow the newly created science and technology coordinator to create annual horizon-scanning reports. 

A new OTA unnecessarily increases the surface area for regulatory capture

Apart from the likelihood that a new OTA would be a mere redundancy, it would present yet another vector for regulatory capture (or at least for endless accusations of regulatory capture used to undermine its work). Andrew Yang inadvertently points to this fact on his campaign page calling for a revival of the OTA:

This vital institution needs to be revived, with a budget large enough and rules flexible enough to draw top talent away from the very lucrative private sector.

Yang’s wishcasting aside, there is just no way that you are going to create an institution with a “budget large enough and rules flexible enough” to permanently siphon off top-tier talent from multi-billion-dollar firms working on cutting-edge technologies. What you will do is create an interesting, temporary post-graduate-school or mid-career stopover where top-tier talent can cycle in and out of those top firms. These are highly intelligent, very motivated individuals who want to spend their careers making things, not writing research reports for Congress.

The same experts who are high-level enough to work at the OTA will be similarly employable by large technology and scientific firms. The revolving door is all but inevitable.

The real problem to solve is a lack of modern governance

Lack of adequate information per se is not the real problem facing members of Congress today. The real problem is that, for the most part, legislators neither understand nor seem to care about how best to govern and establish regulatory frameworks for new technology. As a result, Congress passes laws that threaten to slow down the progress of technological development, thus harming consumers while protecting incumbents. 

Assume for the moment that a new OTA could provide some kind of horizon-scanning capability. Even on those terms, it necessarily fails. By the time Congress is sufficiently alarmed by a new or latent “problem” (or at least a politically relevant feature) of a technology, the industry or product under examination has most likely already progressed far enough in its development that it is too late for Congress to do anything useful. Even though the NAPA report’s authors seem to believe that a “horizon scanning” capability would help, in a dynamic economy, truly predicting which technologies will impact society is a bit like trying to predict the weather on a particular day a year hence.

Further, the limits of human cognition restrict the utility of “more information” to the legislative process. Will Rinehart discussed this quite ably, pointing to psychological literature indicating that, in many cases involving technical subjects, more information only makes legislators overconfident. That is to say, they can cite more facts but put fewer of them to good use when writing laws.

The truth is, no degree of expertise will ever again provide an adequate basis for producing prescriptive legislation meant to guide an industry or segment. The world is simply moving too fast.  

It would be far more useful for Congress to explore legislation that encourages firms in highly dynamic industries to develop and enforce voluntary standards that emerge as community standards. See, for example, the observation offered by Jane K. Winn in her paper on information governance and privacy law that

[i]n an era where the ability to compete effectively in global markets increasingly depends on the advantages of extracting actionable insights from petabytes of unstructured data, the bureaucratic individual control right model puts a straightjacket on product innovation and erects barriers to fostering a culture of compliance.

Winn is thinking about what a “governance” response to privacy and to crises like the Cambridge Analytica scandal should be, and weighs those possibilities against the top-down response of the EU with its General Data Protection Regulation (“GDPR”). She notes that preliminary research on the GDPR suggests that framing privacy legislation as bureaucratic control over firms using consumer data can have the effect of stripping away the risk-management features that the private sector is good at developing.

Instead of pursuing legislative agendas that imagine the state as the all-seeing eye at the top of a command-and-control legislative pyramid, lawmakers should seek to enable those with relevant functional knowledge to employ that knowledge for good governance, broadly understood:

Reframing the information privacy law reform debate as the process of constructing new information governance institutions builds on decades of American experience with sector-specific, risk based information privacy laws and more than a century of American experience with voluntary, consensus standard-setting processes organized by the private sector. The turn to a broader notion of information governance reflects a shift away from command-and-control strategies and toward strategies for public-private collaboration working to protect individual, institutional and social interests in the creation and use of information.

The implications for a new OTA are clear. The model of “gather all relevant information on a technical subject to help construct a governing code” was best suited, if it ever worked, to a world that moved at an industrial-era pace. Today, governance structures need to be much more flexible, and the work of an OTA — even if Congress didn’t already have most of its advisory bases covered — has little relevance.

The engineers working at firms developing next-generation technologies are the individuals with the most relevant, timely knowledge. A forward-looking view of regulation would try to develop a means for the information these engineers possess to surface and become an ongoing part of the governing standards.

*note – This post originally said that OTA began “operating” in 1972. I meant to say it began “existence” in 1972. I have corrected the error.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and congressional staff don’t have broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest,

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’ disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps that I didn’t address in the original. The first relates to expert bias, and the second concerns office organization.

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than they would have if they had just randomly chosen an outcome. In technical parlance, this means expert opinions were not calibrated; there wasn’t a correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events experts deemed impossible occurred with some regularity. In a number of fields, these supposedly unlikely events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”
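Calibration is a concrete, checkable property. A minimal sketch (using hypothetical forecast data, not Tetlock’s) groups forecasts by their stated probability and compares each group’s average stated probability against how often the predicted events actually occurred:

```python
# Sketch of a forecast-calibration check: group forecasts by stated
# probability and compare against observed frequencies.
# The sample data below are hypothetical, for illustration only.

def calibration_table(forecasts, outcomes, bins=None):
    """For each probability bin, return (mean stated probability,
    observed frequency of the event, number of forecasts)."""
    if bins is None:
        bins = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
    table = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        group = [(p, o) for p, o in zip(forecasts, outcomes)
                 if lo <= p < hi or (hi == 1.0 and p == 1.0)]
        if not group:
            continue
        mean_p = sum(p for p, _ in group) / len(group)
        freq = sum(o for _, o in group) / len(group)
        table.append((mean_p, freq, len(group)))
    return table

# Hypothetical record: events the forecaster called near-impossible
# (p = 0.05) that nonetheless occurred 25% of the time -- the kind of
# miscalibration Tetlock observed.
forecasts = [0.05] * 20 + [0.9] * 10
outcomes = [1] * 5 + [0] * 15 + [1] * 9 + [0]

for mean_p, freq, n in calibration_table(forecasts, outcomes):
    print(f"stated ~{mean_p:.2f}, observed {freq:.2f} (n={n})")
```

In a calibrated record the stated and observed columns match; in Tetlock’s data, events assigned near-zero probability occurred far more often than stated.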

While there aren’t many studies of expertise within government, workers within agencies have been shown to exhibit overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert-bias literature offers two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would produce overconfident policymakers and riskier policy choices written into law.

But second, and more importantly, what is meant by tech expertise needs to be examined more closely. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, there are diminishing marginal predictive returns to knowledge. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions about Google’s search engine. Coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event,

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believe the results are being manipulated, regardless of being told otherwise.

Smith wasn’t alone, as both Representative Steve Chabot and Representative Steve King brought up concerns about anti-conservative bias. Toward the end of the piece, Binder laid bare his concern, which is shared by many,

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique. True substantive debate would probe the data collection practices of Google instead of the bias of its search results. Using this framing, it seems clear that Congressional members don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: Why is it that political actors like Representatives Chabot, King, and Smith were so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and in videos online. Over time, external communication has risen to a prominent role in congressional offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard-and-fast conclusions, it could help explain why expanded tech expertise hasn’t been a winning legislative issue. The demand just isn’t there. And given the priorities offices actually display, more expertise might not yield many benefits, while also giving offices potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

Gus Hurwitz is Assistant Professor of Law at the University of Nebraska College of Law.

Administrative law really is a strange beast. My last post explained this a bit, in the context of Chevron. In this post, I want to make this point in another context, explaining how utterly useless a policy statement can be. Our discussion today has focused on what should go into a policy statement – there seems to be general consensus that one is a good idea. But I’m not sure that we have a good understanding of how little certainty a policy statement offers.

Administrative Stare Decisis?

I alluded in my previous post to the absence of stare decisis in the administrative context. This is one of the greatest differences between judicial and administrative rulemaking: agencies are not bound by prior judicial interpretations of their statutes, or even by their own prior interpretations. These conclusions follow from relatively recent opinions – Brand-X in 2005 and Fox I in 2009 – and have broad implications for the relationship between courts and agencies.

In Brand-X, the Court explained that a “court’s prior judicial construction of a statute trumps an agency construction otherwise entitled to Chevron deference only if the prior court decision holds that its construction follows from the unambiguous terms of the statute and thus leaves no room for agency discretion.” This conclusion follows from a direct application of Chevron: courts are responsible for determining whether a statute is ambiguous; agencies are responsible for determining the (reasonable) meaning of a statute that is ambiguous.

Not only are agencies not bound by a court’s prior interpretations of an ambiguous statute – they’re not even bound by their own prior interpretations!

In Fox I, the Court held that an agency’s own interpretation of an ambiguous statute imposes no special obligations should the agency subsequently change its interpretation.[1] It may be necessary to acknowledge the prior policy; and factual findings upon which the new policy is based that contradict findings upon which the prior policy was based may need to be explained.[2] But where a statute may be interpreted in multiple ways – that is, in any case where the statute is ambiguous – Congress, and by extension its agencies, is free to choose between those alternative interpretations. The fact that an agency previously adopted one interpretation does not render other possible interpretations any less reasonable; the mere fact that one was previously adopted therefore cannot, on its own, act as a bar to subsequent adoption of a competing interpretation.

What Does This Mean for Policy Statements?

In a contentious policy environment – that is, one where the prevailing understanding of an ambiguous law changes with the consensus of a three-Commissioner majority – policy statements are worth next to nothing. Generally, the value of a policy statement is explaining to a court the agency’s rationale for its preferred construction of an ambiguous statute. Absent such an explanation, a court is likely to find that the construction was not sufficiently reasoned to merit deference. That is: a policy statement makes it easier for an agency to assert a given construction of a statute in litigation.

But a policy statement isn’t necessary to make that assertion, or for an agency to receive deference. Absent a policy statement, the agency needs to demonstrate to the court that its interpretation of the statute is sufficiently reasoned (and not merely a strategic interpretation adopted for the purposes of the present litigation).

And, more important, a policy statement in no way prevents an agency from changing its interpretation. Fox I makes clear that an agency is free to change its interpretations of a given statute. Prior interpretations – including prior policy statements – are not a bar to such changes. Prior interpretations also, therefore, offer little assurance to parties subject to any given interpretation.

Are Policy Statements Entirely Useless?

Policy statements may not be entirely useless. The likely front on which to challenge an unexpected change in an agency’s interpretation of its statute is on Due Process or Notice grounds. The existence of a policy statement may make it easier for a party to argue that a changed interpretation runs afoul of Due Process or Notice requirements. See, e.g., Fox II.

So there is some hope that a policy statement would be useful. But, in the context of Section 5 UMC claims, I’m not sure how much comfort this really affords. Regulatory-takings jurisprudence gives agencies broad power to seemingly contravene Due Process and Notice expectations. This is largely because of the nature of the relief available to the FTC: injunctive relief, such as barring certain business practices, even if it results in real economic losses, is likely to survive a regulatory-takings challenge, and therefore also a Due Process challenge. Generally, the Due Process and Notice lines of argument are best suited against fines and similar retrospective remedies; they offer little comfort against prospective remedies like injunctions.

Conclusion

I’ll conclude the same way that I did my previous post, with what I believe is the most important takeaway from this post: however we proceed, we must do so with an understanding of both antitrust and administrative law. Administrative law is the unique, beautiful, and scary beast that governs the FTC – those who fail to respect its nuances do so at their own peril.


[1] FCC v. Fox Television Stations, Inc., 556 U.S. 502, 514–16 (2009) (“The statute makes no distinction [] between initial agency action and subsequent agency action undoing or revising that action. … And of course the agency must show that there are good reasons for the new policy. But it need not demonstrate to a court’s satisfaction that the reasons for the new policy are better than the reasons for the old one; it suffices that the new policy is permissible under the statute, that there are good reasons for it, and that the agency believes it to be better, which the conscious change of course adequately indicates.”).

[2] Id. (“To be sure, the requirement that an agency provide reasoned explanation for its action would ordinarily demand that it display awareness that it is changing position. … This means that the agency need not always provide a more detailed justification than what would suffice for a new policy created on a blank slate. Sometimes it must—when, for example, its new policy rests upon factual findings that contradict those which underlay its prior policy; or when its prior policy has engendered serious reliance interests that must be taken into account. It would be arbitrary or capricious to ignore such matters. In such cases it is not that further justification is demanded by the mere fact of policy change; but that a reasoned explanation is needed for disregarding facts and circumstances that underlay or were engendered by the prior policy.”).