In my last roundup, I puzzled over the Federal Trade Commission’s (FTC) suit to block Amgen’s acquisition of Horizon Therapeutics. The deal involved no product overlaps whatsoever (i.e., no horizontal competition), a target firm acknowledged to have no competitors for the orphan drugs at issue, and nobody poised to enter into competition either.
I won’t recapitulate the details of my confusion here, but I will point to a new piece by Bill MacLeod (a past chair of the American Bar Association’s Antitrust Section and a former FTC bureau director) and David Evans, in which they raise an issue I didn’t cover: “The Federal Trade Commission may have filed the first merger complaint in a generation that could be dismissed for failure to state a claim.” Which would not look good.
Using UDAP the Right Way
Acknowledging that I carp a lot, and that it’s not all about the fish, here’s what looks like a bona fide consumer-protection case and a win for the FTC and consumers alike. On May 25, the commission announced that:
A federal court sided with the Federal Trade Commission, ruling that James D. Noland, Jr. illegally owned and operated two pyramid schemes—Success By Health (SBH) and VOZ Travel—in violation of the FTC Act and that Noland violated a previous federal court order barring him from pyramid schemes and from misrepresenting multilevel marketing participants’ income potential.
I don’t know all the details, but on a quick look at the matter (and the prior case), it appears they went after cons, not legitimate businesses. And that there were indeed serial violations of the law, including violations of an order dating to 2002. There are frauds and cons out there, and the FTC’s UDAP authority is supposed to address many of them. And perhaps should do so more often.
Data Is Forever, Apparently
Something new from the cabinet of curiosities: a complaint about the end times, or the end of time, or something. In a complaint filed the last day of May, one of the commission’s allegations against Amazon is that:
Alexa’s default settings still save children’s (and adults’) voice recordings and transcripts forever, even when a child no longer uses his Alexa profile and it has been inactive for years.
I’ve been around a while, but forever seems like a really long time. Exactly how many recordings have been saved forever? And how do they know? More plausibly, with respect to time’s arrow, they say that:
Amazon’s privacy disclosures assert that it designed Alexa with privacy in mind, that Amazon will delete users’ voice and geolocation data (and children’s voice data) upon request, and that Amazon carefully limits access to voice data. But until September 2019, Amazon retained children’s voice recordings and transcripts indefinitely unless a parent actively deleted them.
An indefinitely long time is not an infinite length of time.
There’s more, of course. The complaint alleges that representations about parents’ ability to have information deleted on request were not always and fully honored; that is, some sensitive information was maintained, in some form, notwithstanding deletion requests that should have applied to that information. And more broadly, it’s alleged that sensitive information was maintained longer than is “reasonably necessary,” independent of the question of whether there was a deletion request.
Maybe. I don’t know all the ins and outs of the matter, even as alleged (and couldn’t possibly just by reading the relatively brief complaint). One thing that’s interesting is the theory of harm. So far as I can tell, there’s no allegation of a breach, much less one that led to concrete harms to certain kids or their families. And I don’t see any allegation that the information was improperly sold (or given) to third parties, much less that such disclosures led to further harm. Rather, it’s the possession of personal information—in some form—longer than is “reasonably necessary,” and in some cases, allegedly inconsistent with “representations that [Amazon’s] Alexa and Echo devices ‘are designed to protect your privacy’ and ‘provide transparency and control’ (emphasis in original)” that is deemed to be harmful.
There’s a suggestion that a false (or misleading) assurance was made and—critically, under Section 5—that the assurance was material to consumers (parents), who were “substantially harmed” by the degree to which the firm allegedly failed to live up to those assurances. Perhaps, although in that case I find myself wanting much more texture than the complaint provides.
If a data tree stands in the forest, and somebody asked that some leaves or branches connected to their kids be pruned, but not all such leaves or branches were pruned, or a regulator thinks that certain leaves persist longer than they should, but nobody breaches the . . . OK, this version of the tree-falling-in-the-forest story is getting complicated. But I’m wondering, among other things, whether there’s a “reasonably necessary” standard that’s been set and, if so, where? And my annoyingly numerical inner child is wondering how one measures and computes damages.
Taking Credit for Nothing in Particular
Same cabinet, different curiosity: on May 24, the FTC seemed to advertise a win in a merger matter:
In response to the announcement that Boston Scientific Corporation and non-vascular stent manufacturer M.I. Tech Co., Ltd. have terminated their $230 million purchase agreement, Federal Trade Commission Bureau of Competition Director Holly Vedova issued this statement:
“I am pleased that Boston Scientific and M.I. Tech have abandoned their proposed transaction in response to investigations by FTC staff and our overseas enforcement partners. The FTC will not hesitate to take action in enforcing the antitrust laws to protect patients and doctors. I would like to thank the entire FTC team for their excellent work on this matter.”
Ok, but why? By all means, the FTC is charged to enforce the antitrust laws and, pace certain nontrivial disagreements about what those laws say, should not hesitate to do so. I’m certainly not arguing that the staff didn’t do excellent work. Many truly fine and experienced staff remain at the FTC (an alarming number of departures notwithstanding), and I’ve no reason to assume that staff in the Bureau of Competition did anything but excellent work here.
And I’m not arguing that the FTC was wrong to open the investigation or that they would have been wrong to file a complaint seeking to block the merger, had they done so. I have no idea one way or the other. But I don’t see anything in the press release about a complaint, much less a final decision. True, there’s work preliminary to opening an investigation, so there’s that, but I don’t see any allegation that the merger was likely to harm competition and consumers (“doctors and patients”) or that the commission had decided that it was (or had decided to authorize issuance of a complaint alleging that it was).
So . . . leaving aside the question of whether risk of antitrust liability is what drove the parties to abandon the merger, I wonder whether the announcement concerns a good result or an unfortunate one. If it’s a win for the FTC, was it a win for competition and consumers? We’ve not yet seen a policy statement suggesting that all mergers are anticompetitive, have we? Or is that in the new merger guidelines that will drop whenever they drop?
Erin Go Blech
There are goings-on at the U.S. Justice Department’s Antitrust Division, but this time, for another agency, it’s hands-across-the-water on the Ireland/EU Meta fine. My ICLE colleague Kristian Stout wrote about it here. My two quick takes:
Oy, he’s right to worry; and
Are penalties supposed to bear some relationship to actual harms, or are they supposed to be arbitrary exercises of international taxation?
Over the ocean and into the pockets.
An Interesting Query
Over on that Muskiest of platforms (mea culpa/s’lach lanu/my bad), my ICLE colleague Brian Albrecht tweets an inquiry:
Did we ever learn what the FTC found in its investigation, which started at the end of 2021, into supply chain disruptions and inflation?
It’s a fair question, but I don’t have much useful to say in response. I was at the commission through last August, but cannot share any nonpublic information I might have acquired during my time there. I can, however, point readers to the announcement of the inquiry (also termed a “study”), which contains links to the model 6(b) orders sent to manufacturers, distributors, and retailers (three of each, named by the FTC in the announcement).
And I can reply to my colleague’s question with another question: Do you think that any economists were harmed, or even mildly inconvenienced, in the design of that study? Not counting Marta Wosinska, who didn’t resign her post as director of the FTC’s Bureau of Economics until several months later, and, according to rumors reported in Politico, over issues to do with a different inquiry entirely.
Announced with the sort of breathless press release one might expect for the launch of a new product like Waystar Royco’s Living+, the Federal Communications Commission (FCC) has gone into full-blown spin mode over its latest broadband map.
This is, to be clear, the map that the National Telecommunications and Information Administration (NTIA) will use to allocate $42.5 billion to states from NTIA’s Broadband Equity, Access, and Deployment (BEAD) program. Specific allocations are expected to be announced by June 30.
According to FCC Chair Jessica Rosenworcel, the new map is “light years better” than the last round. A light year is 5.88 trillion miles, or enough to circle the earth more than 236 million times, so that must be quite an improvement. But then the FCC’s release proceeds to walk that claim back, with the assertion that the latest map is merely “another step forward” in an “iterative effort” to develop accurate broadband maps.
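For what it’s worth, the arithmetic behind that quip roughly checks out, assuming an Earth circumference of about 24,900 miles (a figure not in the FCC’s release):

$$\frac{5.88 \times 10^{12}\ \text{miles per light year}}{2.49 \times 10^{4}\ \text{miles per circuit of the Earth}} \approx 2.36 \times 10^{8}\ \text{circuits, i.e., roughly 236 million.}$$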
To be fair, the new map is a substantial improvement. According to Fierce Telecom, the latest map attempts to identify every household and small business in the country that should have access to high-speed internet service. While this requires granular data down to the level of individual street addresses, earlier versions were limited to U.S. Census block information. The new location-based map has identified more than 114 million locations where fixed broadband could be installed, while prior maps had information for 8.1 million Census blocks.
The biggest attention-grabbing factoid from the FCC is that “[m]ore than 8.3 million U.S. homes and businesses lack access to high-speed broadband.” This figure is at odds with Census surveys that indicate roughly 3 million households don’t have at-home Internet access. Perhaps “and businesses” is doing a lot of that heavy lifting in the new claim.
Telecompetitor reports that the 8.3 million number is, according to an FCC spokesperson, based on the NTIA’s definition of “unserved.” Under this definition, connections with speeds less than 25/3 Mbps are considered to be “unserved” by broadband. In addition, the NTIA considers locations with connections of greater than 25/3 Mbps to be “unserved” if the service is available only from a fixed wireless provider that uses unlicensed spectrum.
The Wireless Internet Service Providers Association (WISPA) has a more optimistic take on the map’s access estimates, arguing that:
[T]he FCC’s new broadband map tells the success story of the vibrant and growing ISP broadband industry — one working 24/7/365 to almost halve the number of unserved locations since 2020. Down from nearly 14 million unserved to 8 million today.
In addition to the map’s detail of unserved locations, tech journalist Mike Conlow has calculated there are 3.6 million homes that are underserved (i.e., with speeds of less than 100/20 Mbps). His Substack account provides downloadable spreadsheets that are worth checking out.
PCMag suggests one noteworthy change from the last maps is a 50% downgrade in Starlink’s reported speeds. This is a well-known issue. Last year, the FCC rejected Starlink’s application to receive nearly $900 million in broadband funding, citing doubts that the company could provide the grant’s required speeds of 100/20 Mbps.
Writing in Broadband Breakfast, Tom Reid concludes that, despite the improvements, the new maps inflate the availability and speed of many locations:
Broadband improvements have been constrained for decades by inaccurate maps, yet the Federal Communications Commission continues to accept dramatically exaggerated availability and capacity claims from internet service providers. The cumbersome challenge process requires consumers and units of government to prove a negative—a logical fallacy.
Similarly, Joe Valandra, CEO of the Native American-owned firm Tribal Ready, notes that tribal data historically has been excluded or misinterpreted in broadband maps. He urged tribal governments to gather broadband-coverage data for the state mapping process to support grants under the BEAD program.
Based on the short period of time between the latest map’s release and the expected timing of BEAD grants to the states, it’s likely that the allocation of the BEAD funds has already been—or will soon be—established.
Conclusion
It is, of course, always difficult to read the tea leaves, but there are some important things to watch out for as the BEAD process moves forward. There are going to be complaints in various directions: missing locations, locations that don’t exist, incorrect speed data. Nevertheless, the results we have seen thus far are largely in line with what I and my colleagues have been writing for years: about 5-7% of U.S. households are unserved.
On the one hand, public policy has been guided by a reasonable assumption that a small but significant share of the population would benefit from improved (or any) internet access. On the other hand, the latest map reinforces the point we’ve made consistently, which is that much of the story around broadband takeup (rather than access) is focused on that last hard core of nonadopters. Even when service is available, some households will not adopt at any price.
Going forward, the next big challenge will be to make sure the huge wash of BEAD funding isn’t dissipated by waste, fraud, and abuse. Since these are block grants to the states, it will be very easy to lose sight of how the money is spent across the country. Spending that money well will be critical to closing the digital divide.
What should a competition law for the 21st century look like? This point is debated across many jurisdictions. The Digital Markets, Competition, and Consumers Bill (DMCC) would change UK competition law’s approach to large platforms. The bill’s core point is to place the UK Competition and Markets Authority’s (CMA) Digital Markets Unit (DMU) on a statutory footing with relaxed evidentiary standards to regulate so-called “Big Tech” firms more easily. This piece considers some areas to watch as debate regarding the bill unfolds.
Evidence Standards
Since the Magna Carta, the question of evidence for government action has been at the heart of regulation. In that case, of course, a jury of peers decided what the government can do. What is the equivalent rule under the DMCC?
The bill contains a judicial-review standard for challenges to DMU evidence. This amounts to a hands-off approach, and is philosophically quite some distance from the field in Runnymede where King John signed Magna Carta. It is, instead, the social-democratic view that an elite of regulators ought to be empowered for the perceived greater good, subject only to checking that they are within the scope of empowerments and that there is a rational connection between those powers and the decision made. There is, in other words, no jury of peers. On the contrary, there is a panel of experts. And on this view, experts decide what policy to pursue and weigh up the evidence for regulation.
This would be wonderful in a world where everyone could always be trusted. But there are risks in this generosity, as it would also allow even quite weak evidence to prevail. For every Queen Elizabeth II, there is a King John. What happens if a future King John takes over a DMU case? Could those affected by weak evidence standards, or capricious interpretations, push back?
That will not be easy. The risk derives from the classic Wednesbury case, which is the starting point for judicial review of agency action in the UK. The case has similarities to Chevron review in the United States, but without subsequent developments like the analysis of whether policymaking authority has properly been delegated to the agencies, following West Virginia v EPA.
Wednesbury requires a determination to be proven irrational before a court can overturn it. This is a very high bar and amounts to only a sanity test. Black cannot be white, but all shades of grey must be accepted by the court, even if the evidence points strongly against the interpretation. For example, consider the question: is there daylight? There is a great difference between an overcast day and a sunny day, and among early dawn, midday, and late dusk. Yet on a Wednesbury approach, even the latest daylight hour of the darkest day must be called “sunlight” as, yes, there is daylight. It is essentially a tick-box approach. It trusts the regulator completely on policy: in this case, what counts as bright enough to be called daylight.
At some level, this posture barely trusts the courts at all. It thus foregoes major checks and balances that can helpfully come from the courts. It is myopic, in that sometimes a fresh and neutral pair of eyes is important to ensure sensible, reasonable, and effective approaches. All of us have sometimes focused on a tree and not seen the forest. It can be helpful for a concerned friend to tell us that, provided that the friend is fair, reasonable, and makes the comment based on evidence—and gives us a chance to defend our decision to look only at particular trees.
There has been no suggestion that this fair play is lacking from UK courts, so the bill’s hostility to the tribunal’s role is puzzling. Surely, the DMCC’s intention is not to say: leave me alone when you think I am going wrong?
This has already been criticised in influential commentary, e.g., Florian Mueller’s FOSS Patents blog post on the CMA’s recent decision to block the merger of Microsoft and Activision. It is the core reason for the active positions in both the Activision case and the earlier Meta/Giphy case in which, despite a CMA loss on procedural aspects, all policy grounds and evidentiary interpretations withstood challenge.
This will have major implications for worldwide deals and worldwide business practices, not least as it could effectively displace decisions by other jurisdictions to assess evidence more closely, or not to regulate certain aspects of conduct.
There is also the important point that courts’ ability to review evidence has sometimes been very positive. In a nutshell, if the case for regulation is strong, then there should be no problem in the review of evidence by a neutral third party. This can be seen in the leading case on appeal standards in UK telecoms regulation, BT and CityFibre v Ofcom, which—prior to the move to judicial review for such cases—involved deregulation to help encourage innovation in regional business centers (Leeds, Manchester, Birmingham, etc.).
Overreach by Ofcom—in the form of a predatory low-price cap—was holding back regional business development, because it was not profitable to invest in higher-value but also higher-priced next-generation communications systems. This was overturned on appeal under a standard that allowed errors in the evidence base to be identified; notably, a requirement for there to be as many as five rivals in an area before it was to be considered competitive, which simply contradicted relevant customer evidence. It is very unlikely that this helpful result would have obtained had the matter been one for hands-off judicial review.
Evidence Framework
Closely related to the first point on judicial review is the question of affirmative evidence standards. Even under a judicial-review standard, the DMU must still apply the factors in the bill. There are significant framings of evidence in the DMCC.
The designation process
This emphasises scale. A worry here might be that scale alone displaces the analysis of affirmative evidence—i.e., “big is bad” analysis. What if, as in the title of the recent provocative book, sometimes Big is Beautiful? That thought seems to be lacking from the bill (see s.6(1)(a)). As there is a scenario where companies are large, but still competitively constrained, it would be helpful to consider the consumer impacts at the designation stage. There is no case for regulating a company just because it is large if the outcomes are good.
The framing of the countervailing benefit exemption
The bill seeks to provide voice to consumer impacts in its approach to conduct regulation, but the bar is set high. There must be proof of indispensable conduct required for, and proportionate to, a consumer benefit, under s.29(2)(c).
This reverses the burden of proof; companies must prove that they benefit consumers. Normally, this is simply left to the market, unless there is market power. You and I buy products in the marketplace, and this is how consumer benefit is assessed.
In a scenario where this cannot be proven, s.20 would allow conduct orders to require “fair and reasonable terms” (s.20(2)(a)). It does not say to whom or according to whom. This risks allowing the DMU to require reasonable treatment of other businesses, unless the defendant company can prove that consumers benefit. There are strong arguments that this risks harming consumers for the sake of less-efficient competitors.
Consumer evidence
S.44(2) allows, but certainly does not mandate, considering consumer benefits before imposing a pro-competition intervention (PCI). Under s.49(1), such a PCI would have the sweeping market investigation powers in Schedule 8 of the Enterprise Act 2002, which extend to rewriting contracts (Sch 8, rule 2), setting prices (Sch 8, rules 7 and 8), or even breaking up companies (Sch 8, rules 12 and 13). It is therefore essential that the evidence base be specified more precisely. There must be a clear link back to the concern that gave rise to the PCI and an explanation of why the PCI would address it. There is reference to the ability to test remedies in s.49(3) and (4), but this is not mandatory. Without stronger evidentiary requirements, the PCIs risk discretionary government control over large companies.
Given the breadth of these powers, it would be helpful to require affirmative evidence in relation to asserted entry barriers and competitive foreclosure. If there is truly a desire to dilute the current evidence standards, then what remains could still be specified. Not specifically requiring evidence of impacts on entry and foreclosure, as in the current proposal, is unwise.
Prohibited Conduct
The contemplated codes of conduct could have far-reaching consequences. Risks include inadvertent prohibitions on the development of new products and requirements to stop product development where there is an impact on rivals. See especially s.20(3)(b) (own-product preference), and (h) (impeding switching to others), which arguably could encompass even pro-competitive product integration. There is an acute need for clarification here, as product development and product integration frequently affect rivals, but it is also important for consumers and other innovative businesses.
It is risky to use overly broad definitions here (e.g., “restricting interoperability”) without saying more about what makes for stronger or weaker cases for interoperation (both scenarios exist). Interoperability is important, but evidence relating to it would benefit from definition. Otherwise:
Bill s.20(3)(e) could well capture improvements to product design;
Weasel words like “unfair” use of data (s.20(3)(g)) and “users’… own best interests [according to the DMU]” (s.20(2)(e)) are ripe for rent-seeking; and
The vague reference to “using data unfairly” in s.20(3)(g) could be abused to intervene in data-driven markets on an unprincipled basis.
For example, the data provision could easily be used to hobble ad-funded business models that compete with legacy providers. There are tensions here with the stated aim of the legislative consultation, which was to promote, and not to inhibit, innovation.
A simple remediation here would be to apply a balance-of-evidence test reading on consumer impact, as currently happens with “grey list” consumer-protection matters: the worst risks are “blacklisted” (e.g., excluding liability for death) but more equivocal practices (hidden terms, etc.) are “grey listed.” They are illegal, but only where shown, on balance, to be harmful. That simple change would address many of the evidence concerns, as the structure for evidence weighing would be clarified.
Process Protections
The multi-phase due-process protections of the mergers and market-investigations regimes are notably lacking from the conduct and PCI frameworks. For example, a merger matter uses different teams and different timeframes for the initial and final determinations of whether a merger can proceed.
This absence is no surprise, as a major reform elsewhere in the DMCC is to overturn the Competition Appeal Tribunal’s decision in Apple v CMA, which held that the CMA had not applied the market-investigation timing requirements, as the Tribunal interpreted them, and had thus acted outside the statutory time limits. The time limits there are designed to prevent multiple bites of the cherry and to prevent strategic use of protracted threats of investigation.
The bill would allow the CMA more flexibility than under the existing market-investigation regime. Is the CMA really asking to change the law, having failed to abide by due-process requirements in an existing one? That would be a bit like asking for a new chair, having refused to sit on a vacant chair right in front of you. Unless this is clarified, the proposal could be misread as a due-process exemption, precisely because the DMU does not want to give due process.
The DMCC’s proponents will argue that the designation process provides timeframes and a first phase element in the cases of “strategic market status” (SMS) firms, with conduct and PCI regulation to follow only if a designation occurs. This, however, overlooks a crucial element: the designation process is effectively a bill of attainder, aimed at particular companies. Where, then, are the due-process rights for those affected? Logically, protections should therefore exceed those in the Enterprise Act market-investigation setting, as those are marketwide, whereas DMU action is aimed at particular firms.
A very sensible check and balance here would be for the DMU to have to make a recommendation for another CMA team to review, as is common in merger-clearance matters.
Benchmarking and Reviews
The proposal contains requirements for review (e.g., s.35 on review of conduct enforcement). The requirements are, however, relatively weak. They amount to an in-house review with no clear framework. There is a very strong argument for a different body to review the work and to prevent mission creep. This may even be welcome to the DMU, as it outsources review work.
The standard for review (e.g., benefits to end users) ought to be clearly specified. The vague reference to “effectiveness” is not this, and has more in common with EU law (e.g., Toshiba) where “effectiveness” of regulation is determined chiefly by the state, and not by the law. (The holding in Toshiba being that of several interpretations, the state is entitled to the most “effective” one, according to… the state.) To the extent that one hopes that the common law regulatory tradition differs, it is puzzling to see the persistence of this statist approach following UK independence from the EU. Entick v Carrington, the DMCC is not.
Other important benchmarking includes reviews of the work of other jurisdictions. For example, the DMU ought not to be given powers that exceed those of EU regulators. Yet arguably, the current proposal does exactly this by omitting some of the structured evidence points in the EU’s Digital Markets Act. There is also a need to ensure international-comity considerations are given due weight, given the broad jurisdictional tests (s.4: UK users, business, or effect). Others—including, notably, jurisdictions from which the largest companies originate—may make different decisions to regulate or not to regulate.
In the case of the UK-U.S. relationship, there have been some historic disagreements to this effect. For example, is the DMU really to be the George III of the 21st century, telling U.S. business what to do from across the sea? It is doubtful that this is intended, yet some of the commitments packages already have worldwide effect. Some in America might just say: “No more kings!”
Those with a long memory will remember how strenuously the UK government pushed back on perceived U.S. overreach the other way, notably in the Freddie Laker v British Airways antitrust litigation of the 1980s and, in the 1990s, in the amicus brief submitted by the UK government in Hartford Fire Insurance v California—at the U.S. Supreme Court, no less. Surely it is not intended that the UK, having objected to de facto U.S. and Californian regulation of Lloyd’s of London, should now regulate U.S. tech giants on a de facto worldwide basis under UK law?
Public opinion will not take kindly to that type of inconsistency. To the extent that Parliament does not intend worldwide regulation—a sort of British Empire of Big Tech regulation—the extent of the powers ought to be clarified. Indeed, attempting worldwide regulation would very predictably fail (e.g., arms races in regulation between the DMU and EU Commission). An EU-UK regulation race would help nobody, and it can still be avoided by attention to constructive comity considerations.
As the DMCC makes its way through parliamentary committees, those with views on these points will have an excellent opportunity to make themselves known, just as the CMA has done in recent global deals.
After the oral arguments in Twitter v. Taamneh, Geoffrey Manne, Kristian Stout, and I spilled a lot of ink thinking through the law & economics of intermediary liability and how to draw lines when it comes to social-media companies’ responsibility to prevent online harms stemming from illegal conduct on their platforms. With the Supreme Court’s recent decision in Twitter v. Taamneh, it is worth revisiting that post to see what we got right, as well as what the opinion could mean for future First Amendment cases—particularly those concerning Texas and Florida’s common-carriage laws and other challenges to the bounds of Section 230 more generally.
What We Got Right: Necessary Limitations on Secondary Liability Mean the Case Against Twitter Must be Dismissed
In our earlier post, which built on our previous work on the law & economics of intermediary liability, we argued that the law sometimes does and should allow enforcement against intermediaries when they are the least-cost avoider. This is especially true on social-media sites like Twitter, where information costs may be sufficiently low that effective monitoring and control of end users is possible and pseudonymity makes bringing remedies against end users ineffective. We noted, however, that there are also costs to intermediary liability. These manifest particularly in “collateral censorship,” which occurs when social-media companies remove user-generated content in order to avoid liability. Thus, a balance must be struck:
From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated from the over-deterrence of legal, beneficial speech is why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.
In particular, we noted the need for limiting principles to intermediary liability. As we put it in our Fleites amicus:
In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.
The Court struck very similar notes in its Taamneh opinion regarding the need to limit what they call “secondary liability” under the aiding-and-abetting statute. They note that a person may be responsible at common law for a crime or tort if he helps another complete its commission, but that such liability has never been “boundless.” If it were otherwise, Justice Clarence Thomas wrote for a unanimous Court, “aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance.” Offering the example of a robbery, Thomas argued that if “any assistance of any kind were sufficient to create liability… then anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police.”
Here, the Court found important the common law’s distinction between acts of commission and omission:
[O]ur legal system generally does not impose liability for mere omissions, inactions, or nonfeasance; although inaction can be culpable in the face of some independent duty to act, the law does not impose a generalized duty to rescue… both criminal and tort law typically sanction only “wrongful conduct,” bad acts, and misfeasance… Some level of blameworthiness is therefore ordinarily required.
If omissions could give rise to liability in the absence of an independent duty to act, then there would be no limiting principle to prevent the application of liability far beyond what anyone (except for the cop in the final episode of Seinfeld) would believe reasonable:
[I]f aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer. And those who merely deliver mail or transmit emails could be liable for the tortious messages contained therein. For these reasons, courts have long recognized the need to cabin aiding-and-abetting liability to cases of truly culpable conduct.
Applying this to Twitter, the Court first outlined the theories of how Twitter “helped” ISIS:
First, ISIS was active on defendants’ social-media platforms, which are generally available to the internet-using public with little to no front-end screening by defendants. In other words, ISIS was able to upload content to the platforms and connect with third parties, just like everyone else. Second, defendants’ recommendation algorithms matched ISIS-related content to users most likely to be interested in that content—again, just like any other content. And, third, defendants allegedly knew that ISIS was uploading this content to such effect, but took insufficient steps to ensure that ISIS supporters and ISIS-related content were removed from their platforms. Notably, plaintiffs never allege that ISIS used defendants’ platforms to plan or coordinate the Reina attack; in fact, they do not allege that Masharipov himself ever used Facebook, YouTube, or Twitter.
The Court rejected each of these allegations as insufficient to establish Twitter’s liability in the absence of an independent duty to act, pointing back to the distinction between an act that affirmatively helped to cause harm and an omission:
[T]he only affirmative “conduct” defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants selected or took any action at all with respect to ISIS’ content (except, perhaps, blocking some of it).
In our earlier post on Taamneh, we argued that the plaintiff’s “theory of liability would contain no viable limiting principle” and asked “what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account?” The Court made a similar argument, positing that, while “bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends,” the same “could be said of cell phones, email, or the internet generally.” Despite this, “internet or cell service providers [can’t] incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.”
The Court concluded:
At bottom, then, the claim here rests less on affirmative misconduct and more on an alleged failure to stop ISIS from using these platforms. But, as noted above, both tort and criminal law have long been leery of imposing aiding-and-abetting liability for mere passive nonfeasance.
In sum, since there was no independent duty to act to be found in statute, Twitter could not be found liable under these allegations.
The First Amendment and Common Carriage
It’s notable that the opinion was written by Justice Thomas, who previously invited states to create common-carriage laws that he believed would be consistent with the First Amendment. In his concurrence to the Court’s dismissal (as moot) of the petition for certiorari in Biden v. Knight First Amendment Institute, Thomas wrote of the market power allegedly held by social-media companies like Twitter, Facebook, and YouTube that:
If part of the problem is private, concentrated control over online content and platforms available to the public, then part of the solution may be found in doctrines that limit the right of a private company to exclude. Historically, at least two legal doctrines limited a company’s right to exclude.
He proceeded to outline how common-carriage and public-accommodation laws can be used to limit companies from excluding users, suggesting that they would be subject to a lower standard of First Amendment scrutiny under Turner and its progeny.
Among the reasons for imposing common-carriage requirements on social-media companies, Justice Thomas found it important that they are like conduits that carry speech of others:
Though digital instead of physical, they are at bottom communications networks, and they “carry” information from one user to another. A traditional telephone company laid physical wires to create a network connecting people. Digital platforms lay information infrastructure that can be controlled in much the same way. And unlike newspapers, digital platforms hold themselves out as organizations that focus on distributing the speech of the broader public. Federal law dictates that companies cannot “be treated as the publisher or speaker” of information that they merely distribute. 110 Stat. 137, 47 U. S. C. §230(c).
Thomas also noted the relationship between certain benefits bestowed upon common carriers in exchange for universal service:
In exchange for regulating transportation and communication industries, governments—both State and Federal— have sometimes given common carriers special government favors. For example, governments have tied restrictions on a carrier’s ability to reject clients to “immunity from certain types of suits” or to regulations that make it more difficult for other companies to compete with the carrier (such as franchise licenses). (internal citations omitted)
While Taamneh is not about the First Amendment, some of the language in Thomas’ opinion would suggest that social-media companies are the types of businesses that may receive conduit liability for third-party conduct in exchange for common-carriage requirements.
As noted above, the Court found it important for its holding that there was no aiding-and-abetting by Twitter that “there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs’ own allegations, these platforms appear to transmit most content without inspecting it.” The Court then compared social-media platforms to “cell phones, email, or the internet generally,” which are classic examples of conduits. In particular, phone service was a common carrier that largely received immunity from liability for its users’ conduct.
Thus, while Taamneh wouldn’t be directly binding in the First Amendment context, this language will likely be cited in the briefs by those supporting the Texas and Florida common-carriage laws when the Supreme Court reviews them.
Section 230 and Neutral Tools
On the other hand—and despite the views Thomas expressed about Section 230 immunity in his Malwarebytes statement—there is much in the Court’s reasoning in Taamneh that would lead one to believe the justices see algorithmic recommendations as neutral tools that would not, in and of themselves, restrict a finding of immunity for online platforms.
While the Court’s decision in Gonzalez v. Google basically said it didn’t need to reach the Section 230 question because the allegations failed to state a claim under Taamneh’s reasoning, it appears highly likely that a majority would have found the platforms immune under Section 230 despite their use of algorithmic recommendations. For instance, in Taamneh, the Court disagreed with the assertion that recommendation algorithms amounted to substantial assistance, reasoning that:
By plaintiffs’ own telling, their claim is based on defendants’ “provision of the infrastructure which provides material support to ISIS.” Viewed properly, defendants’ “recommendation” algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.
On the other hand, the Court thought it important to its finding that there were no allegations establishing a nexus (due to unusual provision or conscious and selective promotion) between Twitter’s provision of a communications platform and the terrorist activity:
To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group’s actions or perhaps some definable subset of terrorist acts. There may be, for example, situations where the provider of routine services does so in an unusual way or provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack. Cf. Direct Sales Co. v. United States, 319 U. S. 703, 707, 711–712, 714–715 (1943) (registered morphine distributor could be liable as a coconspirator of an illicit operation to which it mailed morphine far in excess of normal amounts). Or, if a platform consciously and selectively chose to promote content provided by a particular terrorist group, perhaps it could be said to have culpably assisted the terrorist group. Cf. Passaic Daily News v. Blair, 63 N. J. 474, 487–488, 308 A. 2d 649, 656 (1973) (publishing employment advertisements that discriminate on the basis of sex could aid and abet the discrimination).
In other words, this language could suggest that, as long as the algorithms are essentially “neutral tools” (to use the language of Roommates.com and its progeny), social-media platforms are immune for third-party speech that they incidentally promote. But if they design their algorithmic recommendations in such a way that suggests the platforms “consciously and selectively” promote illegal content, then they could lose immunity.
Unless other justices share Thomas’ appetite to limit Section 230 immunity substantially in a future case, this language from Taamneh would likely be used to expand the law’s protections to algorithmic recommendations under a Roommates.com/”neutral tools” analysis.
Conclusion
While the Court did not end up issuing the huge Section 230 decision that some expected, the Taamneh decision will be a big deal going forward for the interconnected issues of online intermediary liability, the First Amendment, and Section 230. Language from Justice Thomas’ opinion will likely be cited in the litigation over the Texas and Florida common-carrier laws, as well as future Section 230 cases.
One of the biggest names in economics, Daron Acemoglu, recently joined the mess that is Twitter. He wasted no time in throwing out big ideas for discussion and immediately getting tons of, let us say, spirited replies.
One of Acemoglu’s threads involved a discussion of F.A. Hayek’s famous essay “The Use of Knowledge in Society,” wherein Hayek questions central planners’ ability to acquire and utilize such knowledge. Echoing many other commentators, Acemoglu asks: can supercomputers and artificial intelligence get around Hayek’s concerns?
Coming back to Hayek’s argument, there was another aspect of it that has always bothered me. What if computational power of central planners improved tremendously? Would Hayek then be happy with central planning?
While there are a few different layers to Hayek’s argument, at least one key aspect does not rest at all on computational power. Hayek argues that markets do not require users to have much information in order to make their decisions.
To use Hayek’s example, when the price of tin increases: “All that the users of tin need to know is that some of the tin they used to consume is now more profitably employed elsewhere.” Knowing whether demand or supply shifted to cause the price increase would be redundant information for the tin user; the price provides all the information about market conditions that the user needs.
To Hayek, this informational role of prices is what makes markets unique (compared to central planning):
The most significant fact about this [market] system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to take the right action.
Good computers, bad computers—it doesn’t matter. Markets just require less information from their individual participants. This was made precise in the 1970s and 1980s in a series of papers on the “informational efficiency” of competitive markets.
This post will give an explanation of what the formal results say. From there, we can go back to debating the relevance for Acemoglu’s argument and the future of central planning with AI.
From Hayek to Hurwicz
First, let’s run through an oversimplified history of economic thought. Hayek developed his argument about information and markets during the socialist-calculation debate, which pitted Hayek and Ludwig von Mises on one side against Oskar Lange and Abba Lerner on the other. Lange and Lerner argued that a planned socialist economy could replicate a market economy. Mises and Hayek argued that it could not, because the socialist planner would not have the relevant information.
In response to the socialist-calculation debate, Leonid Hurwicz—who studied with Hayek at the London School of Economics, overlapped with Mises in Geneva, and would ultimately be awarded the Nobel Memorial Prize in 2007—developed the formal language in the 1960s and 1970s that became what we now call “mechanism design.”
Specifically, Hurwicz developed an abstract way to measure how much information a system needed. What does it mean for a system to require little information? What is the “efficient” (i.e., minimal) amount of information? Two later papers (Mount and Reiter (1974) and Jordan (1982)) used Hurwicz’s framework to prove that competitive markets are informationally efficient.
Understanding the Meaning of Informational Efficiency
How much information do people need to achieve a competitive outcome? This is where Hurwicz’s theory comes in. He gave us a formal way to discuss more and less information: the size of the message space.
To understand the message space’s size, consider an economy with six people: three buyers and three sellers. Some buyers—call them type B3—are willing to pay $3; type B2 is willing to pay $2; and type B1 is willing to pay $1. Sellers of type S0 are willing to sell for $0, sellers of type S1 for $1, and sellers of type S2 for $2. Each buyer knows their valuation for the good, and each seller knows their cost.
Here’s the weird exercise. Along comes an oracle who knows everything. The oracle decides to figure out a competitive price that will clear the market, so he draws out the supply curve (in orange) and the demand curve (in blue), and picks an equilibrium point where they cross (in red).
So the oracle knows a price of $1.50 and a quantity of 2 is an equilibrium.
Now, we, the ignorant outsiders, come along and want to verify that the oracle is telling the truth and knows that it is an equilibrium. But we shouldn’t take the oracle’s word for it.
How can the oracle convince us that this is an equilibrium? We don’t know anyone’s valuation.
The oracle puts forward a game to the six players. The oracle says:
The price is $1.50, meaning that if you buy 1, you pay $1.50; if you sell 1, you receive $1.50.
If you say you’re B3 (which means you value the good at $3), you must buy 1.
If you say you’re B2, you must buy 1.
If you say you’re B1, you must buy 0.
If you say you’re S0, you must sell 1.
If you say you’re S1, you must sell 1.
If you say you’re S2, you must sell 0.
The oracle then asks everyone: do you accept the terms of this mechanism? Everyone says yes, because only the buyers who value it more than $1.50 buy and only the sellers with a cost less than $1.50 sell. By everyone agreeing, we (the ignorant outsiders) can verify that the oracle did, in fact, know people’s valuations.
Now, let’s count how much information the oracle needed to communicate. He needed to send a message that included the price and the trades for each type. Technically, he didn’t need to say S2 sells zero, because it is implied by the fact that the quantity bought must equal the quantity sold. In total, he needs to send six messages.
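For readers who want the bookkeeping spelled out, here is a minimal Python sketch of the example above. The type labels, valuations, price, and message count mirror the six-person economy described in the text; the variable names are mine and purely illustrative.

```python
# The six-person economy from the example: each buyer's willingness to pay
# and each seller's cost, keyed by the type labels used in the text.
buyer_values = {"B3": 3.0, "B2": 2.0, "B1": 1.0}
seller_costs = {"S0": 0.0, "S1": 1.0, "S2": 2.0}

price = 1.50  # the price the oracle announces

# Each agent accepts the oracle's proposed trade only if it leaves them no
# worse off: buyers buy iff value >= price, sellers sell iff cost <= price.
buys = {b: int(v >= price) for b, v in buyer_values.items()}
sells = {s: int(c <= price) for s, c in seller_costs.items()}

# The market clears: two units bought, two units sold.
assert sum(buys.values()) == sum(sells.values()) == 2

# The oracle's communication: one price, plus a proposed trade for each type,
# minus one (S2's zero trade is implied by market clearing).
messages = 1 + len(buyer_values) + len(seller_costs) - 1
print(messages)  # -> 6
```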
The formal exercise amounts to counting each message that needs to be sent. With a formally specified way of measuring how much information is required in competitive markets, we can now ask whether this is a lot.
If you don’t care about efficiency, you can always save on information: say nothing, have no one trade, and use a message space of size 0. Just do nothing.
But in the context of the socialist-calculation debate, the argument was over how much information was needed to achieve “good” outcomes. Lange and Lerner argued that market socialism could be efficient, not that it would result in zero trade, so efficiency is the welfare benchmark we are aiming for.
If you restrict your attention to efficient outcomes, Mount and Reiter (1974) showed you cannot use less information than competitive markets. In a later paper, Jordan (1982) showed that no other mechanism can match the competitive mechanism in terms of information: the competitive mechanism is the unique mechanism that achieves efficiency with a message space of this minimal dimension.
Acemoglu reads Hayek as saying “central planning wouldn’t work because it would be impossible to collect and compute the right allocation of resources.” But the Jordan and Mount & Reiter papers don’t claim that computation is impossible for central planners. Take whatever computational abilities exist, from the first computer to the newest AI—competitive markets always require the least information possible. Supercomputers or AI do not, and cannot, change that relative comparison.
Beyond Computational Issues
In terms of information costs, the best a central planner could hope for is to mimic exactly the market mechanism. But then, of what use is the planner? She’s just one more actor who could divert the system toward her own interest. As Acemoglu points out, “if the planner could collect all of that information, she could do lots of bad things with it.”
The incentive problem is a separate problem, which is why Hayek tried to focus solely on information. Think about building a road. There is a concern that markets will not provide roads because people would be unwilling to pay for them without being coerced through taxes. You cannot simply ask people how much they are willing to pay for the road and charge them that price. People will lie and say they do not care about roads. No amount of computing power fixes incentives. Again, computing power is tangential to the question of markets versus planning. Superior computational power doesn’t help.
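To make the incentive problem concrete, here is a small, purely illustrative Python sketch; the road cost and valuations are hypothetical numbers of my own, not anything from Hayek or Acemoglu. If the road is built whenever reported values cover its cost and each person is charged what they report, understating your value is the profitable move.

```python
ROAD_COST = 70
true_values = [40, 40, 40]  # everyone genuinely values the road at 40

def payoffs(reports, values):
    # Build the road if the reported values cover the cost; charge each
    # person their own report. Payoff = value received minus the charge.
    built = sum(reports) >= ROAD_COST
    return [(v - r) if built else 0 for v, r in zip(values, reports)]

print(payoffs([40, 40, 40], true_values))  # truthful reports -> [0, 0, 0]
print(payoffs([0, 40, 40], true_values))   # person 1 lies    -> [40, 0, 0]
# Lying raises person 1's payoff from 0 to 40, so "just ask people what
# they'd pay" unravels; no amount of computing power changes the incentive.
```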
There’s a lot buried in Hayek and all of those ideas are important and worth considering. They are just further complications with which we should grapple. A handful of theory papers will never solve all of our questions about the nature of markets and central planning. Instead, the formal papers tell us, in a very stylized setting, what it would even mean to quantify the “amount of information.” And once we quantify it, we have an explicit way to ask: do markets use minimal information?
For several decades, we have known that the answer is yes. In recent work, Rafael Guthmann and I show that informational efficiency can extend to big platforms coordinating buyers and sellers—what we call market-makers.
The bigger problem with Acemoglu’s suggestion that computational abilities can solve Hayek’s challenge is that Hayek wasn’t merely thinking about computation and the communication of information. Instead, Hayek was concerned about our ability to even articulate our desires. In the example above, the buyers know exactly how much they are willing to pay and sellers know exactly how much they are willing to sell for. But in the real world, people have tacit knowledge that they cannot communicate to third parties. This is especially true when we think about a dynamic world of innovation. How do you communicate to a central planner a new product?
The real issue is that market dynamics require entrepreneurs who are imagining new futures with new products like the iPhone. Major innovations will never be able to be articulated and communicated to a central planner. All of these readings of Hayek and the market’s ability to communicate information—from formal informational efficiency to tacit knowledge—are independent of computational capabilities.
Some may refer to this as the Roundup Formerly Known as the FTC Roundup. If you recorded yourself while reading out loud, and your name is Dove, that is what it sounds like when doves sigh.
Maybe He Never Said ‘Never’
The U.S. Justice Department’s (DOJ) Antitrust Division recently agreed to settle its challenge of Swedish conglomerate Assa Abloy’s proposed acquisition of the hardware and home-improvement division of Spectrum Brands. Assa Abloy will divest certain assets as a condition of settling the case and consummating the merger.
That’s of interest to those following residential-door-hardware markets—about which I know very little, although I have purchased such hardware on occasion—but it’s also of interest because Assistant Attorney General Jonathan Kanter, who heads the division, has (like Federal Trade Commission Chair Lina Khan) repeatedly decried settling merger cases. He has said he is “concerned that merger remedies short of blocking a transaction too often miss the mark” and that he believes “[o]ur goal is simple: we must be prepared to try cases to a verdict when we think a violation has taken place.”
More colorfully: “I’m here to declare that we’re not part of the chickenshit club.” À la Groucho Marx, he doesn’t want to belong to any club that will accept him as a member.
There has, at least sometimes, been a caveat: “[o]ur duty is to litigate, not settle, unless a remedy fully prevents or restrains the violation.” So maybe it was a line in the sand, but not cast in stone. Or maybe it wasn’t exactly a line.
And while I never really followed the “losing is winning” rhetoric (never uttered by a high school coach in any sport anywhere), I do understand that a tie is often preferable to a loss, and that settling can even be a win-win. Perhaps even when you (say, the DOJ, for example) basically agree to the settlement proposed by the other side.
Of Orphans and Potential Competition
All this reminds me of the “open offer” in the Illumina/Grail matter over at the FTC, which was puzzled over here, there, and nearly everywhere. More recently, the FTC has filed suit to block Amgen’s acquisition of Horizon Therapeutics, which the commission announced with a press release bearing the headline: “FTC Sues to Block Biopharmaceutical Giant Amgen from Acquisition that Would Entrench Monopoly Drugs Used to Treat Two Serious Illnesses.”
Or, as others might call it, “if you think the complaint in Illumina/Grail was speculative, take a look at this.”
At stake are Horizon’s drugs Tepezza (used to treat thyroid eye disease) and Krystexxa (used to treat chronic refractory gout). Both are designated as “orphan drugs,” which means they treat rare conditions and enjoy various tax and regulatory benefits as a result. And as the FTC correctly notes: “[n]either of these treatments have any competition in the pharmaceutical marketplace.” That is, the patient population for each drug is fairly small, but for those who have thyroid eye disease or chronic refractory gout, there are no substitutes. Patients might well benefit from greater competition.
Given that these are currently monopoly products, the FTC cannot be worried about harm to an existing, otherwise competitive market. Amgen has no drugs in head-to-head competition with either Tepezza or Krystexxa, and neither does any other biologics or pharmaceutical firm. And there’s no allegation of unearned market power—Tepezza and Krystexxa are approved products, and there’s no allegation that their approval or marketing has been anything other than lawful. Market power is not supposed to change with the acquisition. Certainly not on day one, or on any day soon.
Rather, there’s a concern that Amgen will (allegedly) be likely to engage in conduct that harms competition that’s expected to develop, at some time or other. The complaint alleges that Amgen will be likely to leverage its other products in such a way as to “raise… [their] rivals’ barriers to entry or dissuade them from competing as aggressively if and when they gain FDA approval.” The most likely route to this, according to the FTC complaint, would be to exploit bargaining leverage with pharmacy benefit managers (PBMs) to secure favorable placement in the formularies that PBMs design for various health plans.
Perhaps. The evidence suggests that most vertical mergers are procompetitive, but a vertically integrated firm can have an incentive to foreclose rivals, which may or may not lead to a net loss to competition and consumers, depending on the facts and circumstances.
But then there’s the “if and when” part. We don’t really know what the relevant facts and circumstances are—not from the public documents, at any rate. We are told that the Tepezza and Krystexxa monopolies will “not last forever,” but we’re not told who will enter when. There’s also no clear suggestion as to how a combined Amgen/Horizon could foreclose the development of a would-be competitor. Neither firm controls a critical input, would-be rivals’ clinical trials, or the Food and Drug Administration’s (FDA) approval process.
As for potential future competition, the large PBMs are not unsophisticated bargainers or lacking in leverage of their own. Hence, the FTC’s much-ballyhooed PBM investigations. On the one hand, there’s typically some forward-looking aspect to merger analysis: what would competition look like, but for the merger? On the other hand, as Niels Bohr and Yogi Berra have variously observed: “It is hard to make predictions, especially about the future.” Some predictions are harder than others, and some are just shots in the dark. As former FTC Commissioner Joshua Wright observed in his dissent in Nielsen Holdings, grounded…
…predictions about the evolution of a market [are] based upon a fact-intensive analysis …. when assessing whether future entry would counteract a proposed transaction’s competitive concerns, the agencies evaluate a number of facts—such as the history of entry in the relevant market and the costs a future entrant would need to incur to be able to compete effectively—to determine whether entry is “timely, likely, and sufficient.”
That was hard to do in Nielsen. It was hard to do (and the commission failed to do it) in the Meta/Within case. And it’s hard to do when we’re dealing with complex molecule products, when entry must clear significant regulatory hurdles, and when we have no clinical data establishing (or even, based on which, we might estimate) the approval and entry of any particular competing product in some specified timeframe.
Drugs in late-stage development may be far enough along in the approval process that one can reasonably predict approval and entry in a year or two. Not with any certainty, of course. Things happen. But predictions can be made with some confidence, at least when it comes to simple molecule pharmaceutical drugs (as opposed to biologics) and perhaps with drugs already approved by foreign regulators based on substantial clinical trials. But this is not that. There are potential rivals in the developmental pathway, but there seem to be zero reported results. None. That is, none reported by the FDA, where it reports such things, and none mentioned in the FTC’s complaint. So we seem to lack the sort of data that might facilitate a reasonable prediction about the particulars of future entry, should it occur.
Nobody appears poised to enter the market, and there is no clear near-term entrant, save one. As the complaint explains:
Horizon is currently developing a subcutaneously administered version of Tepezza, which it estimates will receive FDA approval. … The planned introduction of this subcutaneous Tepezza formulation promises to further lower Amgen’s logistical and economic barriers to establishing multi-product contracts between its pharmacy benefit products, like Enbrel, and Tepezza.
Perhaps, but surely that’s a double-edged sword for the FTC’s complaint, at best. Amgen’s stock of blockbusters—the alleged source of their leverage, should push come to shove—would not be affected. And there’s no reason to think (and no allegation) that Amgen would not continue the development of a new form of delivery for Tepezza.
The complaint maintains that “[t]here are no countervailing factors sufficient to offset the likelihood of competitive harm from the Proposed Acquisition.” But we have no idea how to estimate the risk that’s supposed to be offset. Certainly, the complaint doesn’t tell us, and the complaint itself hints at potentially offsetting factors in the very same paragraph: research, development, and marketing efficiencies, as well as the possibility of lower regulatory costs, courtesy of Amgen’s pockets, sophistication, and experience. If the subcutaneous Tepezza product could be brought to market sooner, and/or marketed more effectively, consumers wouldn’t be harmed. They would benefit.
It seems we really have no idea what future competition might or might not look like two or three years down the road, or four or five. Indeed, it’s not clear when or whether a rival to either drug will be approved for marketing in the United States, whether Amgen (or Horizon) attempts to erect barriers to entry or not. Moreover, there’s no obvious route by which Amgen can impede the development of rival products. Is the FTC estimating a risk of harm to competition or guessing?
Statisticians (and economists) distinguish between Type 1 and Type 2 errors: false positives and false negatives, respectively. There’s ongoing debate over whether the current state of the law pays too much attention to the risk of false positives and not enough to the risk of false negatives. Be that as it may, there are very real costs when procompetitive mergers are wrongly identified as anticompetitive and blocked accordingly.
The perfect no-false-negatives strategy of “block all mergers” (or all where there’s a non-zero risk of competitive harm) cannot be adopted for free. That ought to be plain in the case of drug development (and, say, the type of cancer tests at issue in Illumina/Grail). The population of consumers comprises patients and payers; delay the benefits of efficient mergers, and patients are harmed. A complaint is just that, but does the FTC’s complaint show that harm is likely on any particular time frame, or simply possible at some point?
Looking back at the past 25 years, one might view the FTC’s attention to mergers in the health-care sector as a model of research-based enforcement, with important contributions from the Bureau of Economics and the policy shop, in addition to those of enforcers in the Bureau of Competition. That was a nice view; I miss it.
Brexit was supposed to free the United Kingdom from Brussels’ heavy-handed regulation and red tape. But dreams of a Singapore-on-the-Thames are slowly giving way to ill-considered regulation that threatens to erode Britain’s position as one of the world’s leading tech hubs.
The UK Competition and Markets Authority’s recent decision to block the merger of Microsoft and game-maker Activision-Blizzard offers a case in point. Less than a month after the CMA formally announced its opposition to the deal, the European Commission threw a spanner in the works. Looking at the same facts, the commission—no paragon of free-market thinking—concluded the merger would benefit competition and consumers, paving the way for it to move ahead on the Old Continent.
The two regulators disagree on the likely effects of Microsoft’s acquisition. The European Commission surmised that bringing Activision-Blizzard titles to Microsoft’s Xbox will create tougher competition for Sony, leading to lower prices and better games (conditional on several remedies). This makes sense. Sony’s PlayStation 5 is by far the market leader, currently outselling the Xbox four to one. Closing the content gap between these consoles will make the industry more competitive.
In contrast, the CMA’s refusal hinged on hypothetical concerns about the embryonic cloud-gaming market, which is estimated to be worth £2 billion worldwide, compared to £40 billion for console gaming. The CMA feared that, despite proposed temporary remedies, Microsoft would foreclose rivals by eventually making Activision-Blizzard titles exclusive to its cloud platform.
Unfortunately, this narrow focus on cloud gaming at the expense of the console market essentially amounts to choosing a bird in the bush instead of two in the hand. Worse, it highlights the shortcomings of the UK’s current approach to economic regulation.
Even if the CMA were correct on the substance of the case—and there are strong reasons to believe it is not—its decision would still be harmful to the UK economy. For one thing, this tough stance may cause two of the world’s leading tech firms to move thousands of jobs away from the UK. More fundamentally, foreign companies and startup founders will not want to tie themselves to a jurisdiction whose regulatory authorities show such disdain for the firms they host.
Given what we have already seen from the CMA, it would appear ill-advised to further increase the authority’s powers and reduce judicial oversight of its decisions. Yet that is precisely what the pending Digital Markets, Competition and Consumers Bill would do.
The bill would give the CMA vast authority to shape firms operating in “digital markets” according to its whims. It would cover almost any digital service offered by a firm whose turnover exceeds certain thresholds. And just like the CMA’s merger-review powers, these new rules would be subject to only limited judicial oversight—judicial review rather than merits-based appeals.
The power to shape the internet in the UK (and, indirectly, abroad) would thus be entrusted to a regulator that fails to grasp that hypothetical and remediable concerns in one tiny market (cloud gaming) are no reason to block a transaction that has vast countervailing benefits in another (console gaming).
In turn, this threatens to deter startup creation in the UK. Firms will invest abroad if choosing the UK makes them vulnerable to the whims of an overzealous regulator, which would be the case under the digital markets bill. This could mean fewer tech jobs in the UK, as well as the erosion of London’s status as one of the world’s leading tech hubs.
The UK is arguably at the forefront of technologies like artificial intelligence and nuclear fusion. A tough merger-control policy that signals to startup founders that they will be barred from selling their companies to larger firms could have a disastrous impact on the UK’s competitiveness in those fields.
The upshot is that, when it comes to economic regulation, the United Kingdom is not an island. It cannot stand alone in a globalized world, where tech firms, startup founders, and VCs choose the jurisdictions that are most accommodating and that maximize the chance their businesses will thrive.
With Brexit now complete, the UK is free to replace legacy Brussels red tape with light-touch rules that attract foreign firms and venture capital investments. Yet the UK seems to be replicating many of Brussels’ shortcomings. Fortunately, there is still time for Parliament to change course on the digital markets bill.
The United Kingdom’s 2016 “Brexit” decision to leave the European Union created the opportunity for the elimination of unwarranted and excessive EU regulations that had constrained UK economic growth and efficiency.
Recognizing that fact, former Prime Minister Boris Johnson launched the Task Force on Innovation, Growth, and Regulatory Reform, whose May 2021 report recommended “a new regulatory vision for the UK.” That vision emphasized “[p]romot[ing] productivity, competition and innovation through a new framework of proportionate, agile and less bureaucratic regulation.”
Although the report contained numerous specific reform proposals, relatively little happened in its immediate wake. Last week, however, the UK Department for Business and Trade announced an initial package of regulatory reforms intended to “reduce unnecessary regulation for businesses, cutting costs and allowing them to compete.” The initial package is focused on:
“reducing the business burden”;
“[e]nsuring regulation is, by default, the last rather than first response of Government”;
“[i]mproving regulators’ focus on economic growth by ensuring regulatory action is taken only when it is needed”;
“[p]romoting competition and productivity in the workplace”; and
“[s]timulating innovation, investment and growth by announcing two strategic policy statements to steer our regulators.”
As we explain in a May 15 piece published by CapX, while this latest development holds some real promise, a bit of caution is in order:
For too long the UK’s approach to regulation has been warped by a strange kind of numbers game: How many laws can be removed? What percentage of EU laws on the UK rule book can be dispensed with? How many quangos can go on the bonfire?
It’s the kind of misguided approach that has led to headline-grabbing projects like the revival of imperial measures – a purely symbolic gesture that did nothing to improve competition, liberalise the economy or raise people’s living standards.
Rather than this rather performative approach, our new book Trade, Competition and Domestic Regulatory Policy suggests a very different approach to regulatory reform.
First, does the proposed reform establish a framework that can be used to ensure that future regulation is as pro-competitive as possible? Are actual mechanisms established, or are the principles merely hortatory?
Second, how does the reform impact the stock of existing regulation? How precisely will those regulations be made more proportionate, subject to the test of necessity, and generate pro-competitive and open trade outcomes?
Third, is there a moral philosophical choice embedded in the approach? This will be vital to ensuring that reform is not some random hotch-potch of ideas, designed more for a tabloid front page than as a real, sustainable and concrete reform.
Encouragingly, if we look through these lenses in turn, we find that the beginnings of a framework are emerging here in the UK.
The Government’s recent package of regulatory reform has much to commend it. It establishes an overall set of governing principles for future regulation, and also requires the review of our existing stock of regulation, including the body of EU rules that are still part of UK law. The focus on necessity, proportionality and competition is particularly welcome, as is the consideration of how regulation affects economic growth.
It’s not perfect – we do think, for instance, that the framework could go farther and actually embed the Competition and Markets Authority into the regulatory promulgation process more concretely. This should not be controversial. The OECD itself made these recommendations in its Regulatory Toolkit and Competition Assessment some 20 years ago, which was coincidentally the time when the spread of regulatory distortions seemed to accelerate. The International Competition Network (ICN), comprised of most national competition agencies, has also recommended that those agencies advocate for competition in the regulatory promulgation process.
The UK has indicated that they would apply this approach to the stock of regulation, much of which is retained EU law. This represents an opportunity for the UK, as most countries do not have a readily identifiable corpus of regulation to start with. Certainly it is helpful to ensure that common law approaches are applied to the entire UK rule book (including any retained EU law), and that UK interpretation (by judges, and the executive branch) trumps any interpretation of the Court of Justice of the European Union. Of course, it would have been better to have undertaken this task six years ago, when we knew it would be necessary.
Where there is less clarity, it is around the philosophic underpinnings of this regulatory approach, which is regrettable. Back in the early 2000s, the OECD recognised the long-held view that pro-competitive regulation does indeed stimulate an increase in GDP per capita. Separately, this has also been recognised for open trading systems and property rights protection.
None of this should be remotely controversial in the UK, or indeed anywhere else. It is unfortunate that it has become so, largely because of an approach based on a Manichaean view that all EU regulations are bad, and all UK regulations are good, and that success is to be judged on the number of EU rules removed.
More generally, all other nations would also benefit from systematic regulatory reform that aims to ferret out the many anticompetitive market distortions that severely limit economic growth and welfare enhancement. We discuss this topic at length in our recent book on Trade, Competition, and Domestic Regulatory Policy.
Legislation to secure children’s safety online is all the rage right now, not only on Capitol Hill, but in state legislatures across the country. One of the favored approaches is to impose on platforms a duty of care to protect teen users.
For example, Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) have reintroduced the Kids Online Safety Act (KOSA), which would require that social-media platforms “prevent or mitigate” a variety of potential harms, including mental-health harms; addiction; online bullying and harassment; sexual exploitation and abuse; promotion of narcotics, tobacco, gambling, or alcohol; and predatory, unfair, or deceptive business practices.
But while bills of this sort would define legal responsibilities that online platforms have to their minor users, this statutory duty of care is more likely to result in the exclusion of teens from online spaces than to promote better care of teens who use them.
Drawing on the previous research that I and my International Center for Law & Economics (ICLE) colleagues have done on the economics of intermediary liability and First Amendment jurisprudence, I will in this post consider the potential costs and benefits of imposing a statutory duty of care similar to that proposed by KOSA.
The Law & Economics of Online Intermediary Liability and the First Amendment (Kids Edition)
Previously (in a law review article, an amicus brief, and a blog post), we at ICLE have argued that there are times when the law rightfully places responsibility on intermediaries to monitor and control what happens on their platforms. From an economic point of view, it makes sense to impose liability on intermediaries when they are the least-cost avoider: i.e., the party that is best positioned to limit harm, even if they aren’t the party committing the harm.
On the other hand, as we have also noted, there are costs to imposing intermediary liability. This is especially true for online platforms with user-generated content. Specifically, there is a risk of “collateral censorship” wherein online platforms remove more speech than is necessary in order to avoid potential liability. For example, imposing a duty of care to “protect” minors, in particular, could result in online platforms limiting teens’ access.
If the social costs that arise from the imposition of intermediary liability are greater than the benefits accrued, then such an arrangement would be welfare-destroying, on net. While we want to deter harmful (illegal) content, we don’t want to do so if we end up deterring access to too much beneficial (legal) content as a result.
The First Amendment often limits otherwise generally applicable laws, on grounds that they impose burdens on speech. From an economic point of view, this could be seen as an implicit subsidy. That subsidy may be justifiable, because information is a public good that would otherwise be underproduced. As Daniel A. Farber put it in 1991:
[B]ecause information is a public good, it is likely to be undervalued by both the market and the political system. Individuals have an incentive to ‘free ride’ because they can enjoy the benefits of public goods without helping to produce those goods. Consequently, neither market demand nor political incentives fully capture the social value of public goods such as information. Our polity responds to this undervaluation of information by providing special constitutional protection for information-related activities. This simple insight explains a surprising amount of First Amendment doctrine.
In particular, the First Amendment provides important limits on how far the law can go in imposing intermediary liability that would chill speech, including when dealing with potential harms to teenage users. These limitations seek the same balance that the economics of intermediary liability would suggest: how to hold online platforms liable for legally cognizable harms without restricting access to too much beneficial content. Below is a summary of some of those relevant limitations.
Speech vs. Conduct
The First Amendment differentiates between speech and conduct. While the line between the two can be messy (and “expressive conduct” has its own standard under the O’Brien test), governmental regulation of some speech acts is permissible. Thus, harassment, terroristic threats, fighting words, and even incitement to violence can be punished by law. On the other hand, the First Amendment does not generally allow the government to regulate “hate speech” or “bullying.” As the 3rd U.S. Circuit Court of Appeals explained it in the context of a school’s anti-harassment policy:
There is of course no question that non-expressive, physically harassing conduct is entirely outside the ambit of the free speech clause. But there is also no question that the free speech clause protects a wide variety of speech that listeners may consider deeply offensive, including statements that impugn another’s race or national origin or that denigrate religious beliefs… When laws against harassment attempt to regulate oral or written expression on such topics, however detestable the views expressed may be, we cannot turn a blind eye to the First Amendment implications.
In other words, while a duty of care could reach harassing conduct, it is unclear how it could reach pure expression on online platforms without implicating the First Amendment.
Impermissibly Vague
The First Amendment also disallows rules sufficiently vague that they would preclude a person of ordinary intelligence from having fair notice of what is prohibited. For instance, in an order handed down earlier this year in Høeg v. Newsom, the federal district court granted the plaintiffs’ motion to enjoin a California law that would charge medical doctors with sanctionable “unprofessional conduct” if, as part of treatment or advice, they shared with patients “false information that is contradicted by contemporaneous scientific consensus contrary to the standard of care.”
The court found that “contemporary scientific consensus” was so “ill-defined [that] physician plaintiffs are unable to determine if their intended conduct contradicts [it].” The court asked a series of questions relevant to trying to define the phrase:
[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?
Thus, any duty of care to limit access to potentially harmful online content must not be defined in a way that is too vague for a person of ordinary intelligence to know what is prohibited.
Liability for Third-Party Speech
The First Amendment limits intermediary liability when dealing with third-party speech. For the purposes of defamation law, the traditional continuum of liability was from publishers to distributors (or secondary publishers) to conduits. Publishers—such as newspapers, book publishers, and television producers—exercised significant editorial control over content. As a result, they could be held liable for defamatory material, because it was seen as their own speech. Conduits—like the telephone company—were on the other end of the spectrum, and could not be held liable for the speech of those who used their services.
As the Court of Appeals of the State of New York put it in a 1974 opinion:
In order to be deemed to have published a libel a defendant must have had a direct hand in disseminating the material whether authored by another, or not. We would limit [liability] to media of communications involving the editorial or at least participatory function (newspapers, magazines, radio, television and telegraph)… The telephone company is not part of the “media” which puts forth information after processing it in one way or another. The telephone company is a public utility which is bound to make its equipment available to the public for any legal use to which it can be put…
Distributors—which included booksellers and libraries—were in the middle of this continuum. They had to have some notice that content they distributed was defamatory before they could be held liable.
Courts have long explored the tradeoffs between liability and carriage of third-party speech in this context. For instance, in Smith v. California, the U.S. Supreme Court found that a statute establishing strict liability for selling obscene materials violated the First Amendment because:
By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.” (internal citations omitted)
It’s also worth noting that traditional publisher liability was limited in the case of republication, such as when newspapers republished stories from wire services like the Associated Press. Courts observed the economic costs that would attend imposing a strict-liability standard in such cases:
No newspaper could afford to warrant the absolute authenticity of every item of its news’, nor assume in advance the burden of specially verifying every item of news reported to it by established news gathering agencies, and continue to discharge with efficiency and promptness the demands of modern necessity for prompt publication, if publication is to be had at all.
Over time, the rule was extended, either by common law or statute, from newspapers to radio and television broadcasts, with the treatment of republication of third-party speech eventually resembling conduit liability even more than distributor liability. See Brent Skorup and Jennifer Huddleston’s “The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation” for a more thoroughgoing treatment of the topic.
The thing that pushed the law toward conduit liability when entities carried third-party speech was the implicit economic reasoning. For example, in 1959’s Farmers Educational & Cooperative Union v. WDAY, Inc., the Supreme Court held that a broadcaster could not be found liable for defamation made by a political candidate on the air, arguing that:
The decision a broadcasting station would have to make in censoring libelous discussion by a candidate is far from easy. Whether a statement is defamatory is rarely clear. Whether such a statement is actionably libelous is an even more complex question, involving as it does, consideration of various legal defenses such as “truth” and the privilege of fair comment. Such issues have always troubled courts… if a station were held responsible for the broadcast of libelous material, all remarks even faintly objectionable would be excluded out of an excess of caution. Moreover, if any censorship were permissible, a station so inclined could intentionally inhibit a candidate’s legitimate presentation under the guise of lawful censorship of libelous matter. Because of the time limitation inherent in a political campaign, erroneous decisions by a station could not be corrected by the courts promptly enough to permit the candidate to bring improperly excluded matter before the public. It follows from all this that allowing censorship, even of the attenuated type advocated here, would almost inevitably force a candidate to avoid controversial issues during political debates over radio and television, and hence restrict the coverage of consideration relevant to intelligent political decision.
It is clear from the foregoing that imposing a duty of care on online platforms to limit speech in ways that would make them strictly liable would be inconsistent with distributor liability. But even a duty of care that more resembled a negligence-based standard could implicate speech interests if online platforms are seen to be akin to newspapers, or to radio and television broadcasters, when they act as republishers of third-party speech. Such cases would appear to require conduit liability.
The First Amendment Applies to Children
The First Amendment has been found to limit what governments can do in the name of protecting children from encountering potentially harmful speech. For example, California in 2005 passed a law prohibiting the sale or rental of “violent video games” to minors. In Brown v. Entertainment Merchants Ass’n, the Supreme Court found the law unconstitutional, finding that:
No doubt [the government] possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” (internal citations omitted)
The Court did not find it persuasive that the video games were violent (noting that children’s books often depict violence) or that they were interactive (as some children’s books offer choose-your-own-adventure options). In other words, there was nothing special about violent video games that would subject them to a lower level of constitutional protection, even for minors that wished to play them.
The Court also did not find persuasive California’s appeal that the law aided parents in making decisions about what their children could access, stating:
California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.
Justice Samuel Alito’s concurrence in Brown would have found the California law unconstitutionally vague, arguing that constitutional speech would be chilled as a result of the law’s enforcement. The fact its intent was to protect minors didn’t change that analysis.
Limiting the availability of speech to minors in the online world is subject to the same analysis as in the offline world. In Reno v. ACLU, the Supreme Court made clear that the First Amendment applies with equal effect online, stating that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.” In Packingham v. North Carolina, the Court went so far as to call social-media platforms “the modern public square.”
Restricting minors’ access to online platforms through age-verification requirements has already been found to violate the First Amendment. In Ashcroft v. ACLU (II), the Supreme Court reviewed provisions of the Child Online Protection Act (COPA) that would restrict posting content “harmful to minors” for “commercial purposes.” COPA allowed an affirmative defense if the online platform restricted access by minors through various age-verification devices. The Court found that “[b]locking and filtering software is an alternative that is less restrictive than COPA, and, in addition, likely more effective as a means of restricting children’s access to materials harmful to them” and upheld a preliminary injunction against the law, pending further review of its constitutionality.
On remand, the 3rd Circuit found that “[t]he Supreme Court has disapproved of content-based restrictions that require recipients to identify themselves affirmatively before being granted access to disfavored speech, because such restrictions can have an impermissible chilling effect on those would-be recipients.” The circuit court would eventually uphold the district court’s finding of unconstitutionality and permanently enjoin the statute’s provisions, noting that the age-verification requirements “would deter users from visiting implicated Web sites” and therefore “would chill protected speech.”
A duty of care to protect minors could be unconstitutional if it ends up limiting access to speech that is not illegal for them to access. Age-verification requirements that would likely accompany such a duty could also result in a statute being found unconstitutional.
In sum:
A duty of care to prevent or mitigate harassment and bullying has First Amendment implications if it regulates pure expression, such as speech on online platforms.
A duty of care to limit access to potentially harmful online speech can’t be defined so vaguely that a person of ordinary intelligence can’t know what is prohibited.
A duty of care that establishes a strict-liability standard on online speech platforms would likely be unconstitutional for its chilling effects on legal speech. A duty of care that establishes a negligence standard could similarly lead to “collateral censorship” of third-party speech.
A duty of care to protect minors could be unconstitutional if it limits access to legal speech. De facto age-verification requirements could also be found unconstitutional.
The Problems with KOSA: The First Amendment and Limiting Kids’ Access to Online Speech
KOSA would establish a duty of care for covered online platforms to “act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate” a variety of potential harms, including:
Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
Patterns of use that indicate or encourage addiction-like behaviors.
Physical violence, online bullying, and harassment of the minor.
Sexual exploitation and abuse.
Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.
Predatory, unfair, or deceptive marketing practices, or other financial harms.
There are also a variety of tools and notices that must be made available to users under age 17, as well as to their parents.
Reno and Age Verification
KOSA could be found unconstitutional under the Reno and COPA line of cases for creating a de facto age-verification requirement. The bill’s drafters appear to be aware of the legal problems that an age-verification requirement would entail. KOSA therefore states that:
Nothing in this Act shall be construed to require—(1) the affirmative collection of any personal data with respect to the age of users that a covered platform is not already collecting in the normal course of business; or (2) a covered platform to implement an age gating or age verification functionality.
But this doesn’t change the fact that, in order to effectuate KOSA’s requirements, online platforms would have to know their users’ ages. KOSA’s duty of care incorporates a constructive-knowledge requirement (i.e., “reasonably should know is a minor”). A duty of care combined with the mandated notices and tools that must be made available to minors makes it “reasonable” that platforms would have to verify the age of each user.
If a court were to agree that KOSA doesn’t require age gating or age verification, this would likely render the act ineffective. As it stands, most of the online platforms that would be covered by KOSA only ask users their age (or birthdate) upon creation of a profile, which is easily evaded by simple lying. While those under age 17 (but at least age 13) at the time of the act’s passage who have already created profiles would be implicated, it would appear the act wouldn’t require platforms to vet whether users who said they were at least 17 when they created new profiles were actually telling the truth.
Vagueness and Protected Speech
Even if KOSA were not found unconstitutional for creating a de facto age-verification scheme, it still likely would lead to kids under 17 being restricted from accessing protected speech. Several of the types of speech the duty of care covers could include legal speech. Moreover, the prohibited speech is defined so vaguely that it likely would lead to chilling effects on access to legal speech.
For example, pictures of photoshopped models are protected speech. If teenage girls want to see such content on their feeds, it isn’t clear that the law can constitutionally stop them, even if it’s done by creating a duty of care to prevent and mitigate harms associated with “anxiety, depression, or eating disorders.”
Moreover, access to content that kids really like to see or hear is still speech, even if they like it so much that an outside observer may think they are addicted to it. Much as the Court said in Brown, the government does not have “a free-floating power to restrict [speech] to which children may be exposed.”
KOSA’s Section 3(A)(1) and 3(A)(2) would also run into problems, as they are so vague that a person of ordinary intelligence would not know what they prohibit. As a result, there would likely be chilling effects on legal speech.
Much like in Høeg, the phrase “consistent with evidence-informed medical information” leads to various questions regarding how an online platform could comply with the law. For instance, it isn’t clear what content or design issue would be implicated by this subsection. Would a platform need to hire mental-health professionals to consult with them on every product-design and content-moderation decision?
Even worse is the requirement to prevent and mitigate “patterns of use that indicate or encourage addiction-like behaviors,” which isn’t defined by reference to “evidence-informed medical information” or to anything else.
Even Bullying May Be Protected Speech
Even KOSA’s duty to prevent and mitigate “physical violence, online bullying, and harassment of the minor” in Section 3(3) could implicate the First Amendment. While physical violence would clearly be outside of the First Amendment’s protections (although it’s unclear how an online platform could prevent or mitigate such violence), online bullying and harassing speech are, nonetheless, speech. As a result, this duty of care could receive constitutional scrutiny regarding whether it effectively limits lawful (though awful) speech directed at minors.
Locking Children Out of Online Spaces
KOSA’s duty of care appears to be based on negligence, in that it requires platforms to take “reasonable measures.” This probably makes it more likely to survive First Amendment scrutiny than a strict-liability regime would.
It could, however, still result in real (and costly) product-design and moderation challenges for online platforms. As a result, there would be significant incentives for those platforms to exclude those they know or reasonably believe are under age 17 from the platforms altogether.
While this is not strictly a First Amendment problem, per se, it nonetheless illustrates how laws intended to “protect” children’s safety while online can actually lead to their being restricted from using online speech platforms altogether.
Conclusion
Despite its being christened the “Kids Online Safety Act,” KOSA will result in real harm for kids if enacted into law. Its likely result would be considerable “collateral censorship,” as online platforms restrict teens’ access in order to avoid liability.
The bill’s duty of care would also either require likely unconstitutional age verification or be rendered ineffective, as teen users lie about their age in order to access desired content.
Congress shall make no law abridging the freedom of speech, even if it is done in the name of children.
[The following is a guest post from Neil Chilson, a senior research fellow with the Center for Growth and Opportunity at Utah State University and former chief technologist of the Federal Trade Commission.]
The Federal Trade Commission (FTC) last week held its first informal hearing in 20 years on Section 18 rulemaking. The hearing itself had a technical delay, which to us participants felt like another 20 years, but was a mere two hours or so.
At issue is a proposed rule intended to target impersonation fraud. Impersonation fraudsters hold themselves out as government officials or company representatives in order to defraud unsuspecting consumers.
I was one of 13 individuals who requested to speak at the informal hearing. My interest is as a consumer with a stake in efficient and effective fraud enforcement and as a former FTC employee proud of the anti-fraud work I contributed to. What follows is adapted from my remarks.
Imposter Fraud Deserves a Good Rule
As the record clearly shows, imposter fraud is a too-common occurrence and costs consumers and businesses millions of dollars a year. We need a good rule here—one that effectively targets fraud with minimal impact on lawful behavior and that is legally sustainable.
To that end, two points. First, the rule as written, unlike every other Section 18 rule, is broader than Section 5 and ought to be narrowed. Second, the FTC caselaw is indefinite on the contours of means and instrumentalities. The record shows that this provision is already being misunderstood. The FTC should correct this misperception.
Together, these issues mean that this proceeding has likely failed to put potentially affected parties on notice, leaving a factual gap in the record and in the agency’s regulatory impact analysis.
The Text of the Rule Is Overly Broad
This proceeding is targeted at addressing impersonation frauds and scams in commerce—acts that clearly violate Section 5.
Yet the rule as written declares unlawful activities that would not violate Section 5’s prohibition on deceptive acts or practices. The rule does not reference “unfairness” or “deception” or note that prohibited activities must be in commerce.
On its face, the draft rule would prohibit a comedian from impersonating Elon Musk; John Ratzenberger from portraying a mailman; or a kid from dressing up as Abraham Lincoln. With the means and instrumentalities provision, it would appear to be “unlawful” to even provide an Abraham Lincoln costume to said child.
Of course, courts would not permit such overbroad applications of the rule. And it seems unlikely that this FTC would spend its resources pursuing cases that the courts would reject out of hand. But rules should be written assuming that some future leadership might seek to abuse them, perhaps to chill unflattering portrayals of national politicians.
The notice of proposed rulemaking (NPRM) states that Section 5 hems in the broad language of the rule. But that gets the purpose of FTC rulemaking backward. The text of the rule should clearly delimit a subset of practices prohibited by Section 5, not the other way around. Indeed, every one of the six past rules created through Section 18 has been written as a subset of Section 5. Every one of them specifies in text that the prohibited conduct is “in commerce.” Each one also describes the prohibited conduct as either an “unfair act or practice” or a “deceptive act or practice,” or both. One of them, for example, provides:
It is a deceptive act or practice for any used vehicle dealer, when that dealer sells or offers for sale a used vehicle in or affecting commerce as commerce is defined in the Federal Trade Commission Act … to misrepresent the mechanical condition of a used vehicle…
Adding similar language to the draft impersonation rule would be simple and would still achieve the goals of the proceeding. And it would better match the text of the rule to the NPRM’s description of the rule’s scope, helping to cure some of the notice concerns.
Means and Instrumentalities
The second matter is the “means and instrumentalities” provision. I echo the value of having a knowledge requirement. As former FTC Bureau of Consumer Protection (BCP) Director Jessica Rich has noted, there has been debate over the years about the contours of means and instrumentalities, with some commissioners saying that others are using it as a substitute for “aiding and abetting,” a form of secondary liability not within Section 5.
Indeed, some parties in this record have made this mistake. The FTC must clearly articulate the proper scope of the rule, potentially by putting the standard for means and instrumentalities in the rule itself.
To the extent the standard for applying means and instrumentalities liability under Section 5 is itself unclear, it is not a good candidate for rulemaking.
As the U.S. House Energy and Commerce Subcommittee on Oversight and Investigations convenes this morning for a hearing on overseeing federal funds for broadband deployment, it bears mention that one of the largest U.S. broadband-subsidy programs is actually likely to run out of money within the next year. Writing in Forbes, Roslyn Layton observes of the Affordable Connectivity Program (ACP) that it has enrolled more than 14 million households, concluding that it “may be the most effective broadband benefit program to date with its direct to consumer model.”
This may be true, but how should we measure effectiveness? One seemingly simple measure would be the number of households with at-home internet access who would not have it but for the ACP’s subsidies. Those households can be broadly divided into two groups:
Households that signed up for ACP and got at-home internet; and
Households that have at-home internet, but wouldn’t if they didn’t receive the ACP subsidies.
Conceptually, evaluating the first group is straightforward. We can survey ACP subscribers and determine whether they had internet access before receiving the ACP subsidies. The second group is much more difficult, if not impossible, to measure with the available information. We can only guess as to how many households would unsubscribe if the subsidies went away.
To give a bit of background on the program we now call the ACP: broadband has been included since 2016 as a supported service under the Federal Communications Commission’s (FCC) Lifeline program. Among the Lifeline program’s goals are to ensure the availability of broadband for low-income households (to close the so-called “digital divide”) and to minimize the Universal Service Fund contribution burden levied on consumers and businesses.
As part of the appropriations act enacted in 2021 in response to the COVID-19 pandemic, Congress created a temporary $3.2 billion Emergency Broadband Benefit (EBB) program within the Lifeline program. EBB provided eligible households with a $50 monthly discount on qualifying broadband service or bundled voice-broadband packages purchased from participating providers, as well as a one-time discount of up to $100 for the purchase of a device (computer or tablet). The EBB program was originally set to expire when the funds were depleted, or six months after the U.S. Department of Health and Human Services (HHS) declared an end to the pandemic.
With passage of the Infrastructure Investment and Jobs Act (IIJA) in November 2021, the EBB’s temporary subsidy was extended indefinitely and renamed the Affordable Connectivity Program, or ACP. The IIJA allocated an additional $14 billion to provide subsidies of $30 a month to eligible households. Without additional appropriations, the ACP is expected to run out of funding by early 2024.
The Case of the Nonadopters
According to the Information Technology & Innovation Foundation (ITIF), 97.6% of the U.S. population has access to a fixed connection of at least 25/3 Mbps through asymmetric digital subscriber line (ADSL), cable, fiber, or fixed wireless. Pew Research reports that 93% of its survey respondents indicated they have a broadband connection at home.
Pew’s results are in line with U.S. Census estimates from the American Community Survey. The figure below, summarizing information from 2021, shows that 92.6% of households had a broadband subscription or had access without having to pay for a subscription. If ITIF’s estimates of broadband availability are accurate, then approximately two-thirds of the households without broadband—some 6.4 million—nonetheless have access to it.
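As a rough cross-check on those figures, here is a back-of-the-envelope calculation. The total household count is my own assumption for illustration (roughly the order of magnitude of the 2021 Census count); the percentages are the ones cited above.

```python
# Back-of-the-envelope check of the "two-thirds / ~6.4 million" figure.
total_households = 130_000_000        # assumed total; illustration only

share_without_subscription = 0.074    # 1 - 0.926, per the Census figure above
share_without_access = 1 - 0.976      # ITIF: 97.6% have access to 25/3 Mbps

households_without_subscription = total_households * share_without_subscription
households_without_access = total_households * share_without_access

# Households that could subscribe today but do not:
nonadopters_with_access = households_without_subscription - households_without_access

print(f"{nonadopters_with_access / 1e6:.1f} million nonadopters have access but no subscription")
print(f"that is {nonadopters_with_access / households_without_subscription:.0%} "
      "of all households without a subscription")
```

Under that assumed household count, the arithmetic lands at roughly 6.5 million nonadopting households with access, or about two-thirds of all households without a subscription, which is consistent with the figures above.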
On the one hand, price is obviously a major factor driving adoption. For example, among the 7.4% of households who do not use the internet at home, Census surveys show about one-third indicate that price is one reason for not having an at-home connection, responding that they “can’t afford it” or that it’s “not worth the cost.” On the other hand, more than half of respondents said they “don’t need it” or are “not interested.”
But George Ford argues that these responses to the Census surveys are unhelpful in evaluating the importance of price relative to other factors. For example, if a consumer says broadband is “not worth the cost,” it’s not clear whether the “worth” is too low or the “cost” is too high. Consumers who are “not interested” in subscribing to an internet service are implicitly saying that they are not interested at current prices. In other words, there may be a price that is sufficiently low that uninterested consumers become interested.
But in some cases, that price may be zero—or even negative.
A 2022 National Telecommunications and Information Administration (NTIA) survey of internet use found that about 75% of offline households said they wanted to pay nothing for internet access. In addition, as shown in the figure above, about a quarter of households without a broadband or smartphone subscription claim that they can access the internet at home without paying for a subscription. Thus, there may be a substantial share of nonadopters who would not adopt even if the service were free to the consumer.
Aside from surveys, another way to evaluate the importance of price in internet-adoption decisions is with empirical estimates of demand elasticity. The price elasticity of demand is the percent change in the quantity demanded for a good, divided by the percent change in price. A demand curve with an elasticity between 0 and –1 is said to be inelastic, meaning the change in the quantity demanded is relatively less responsive to changes in price. An elasticity of less than –1 is said to be elastic, meaning the change in the quantity demanded is relatively more responsive to changes in price.
Michael Williams and Wei Zao’s survey of the research on the price elasticity of demand concludes that demand for internet services has traditionally been inelastic and has “become increasingly so over time.” They report a 2019 elasticity of –0.05 (down from –0.69 in 2008). George Ford’s 2021 study estimates an elasticity ranging from –0.58 to –0.33. These results indicate that a subsidy program that reduced the price of internet services by 10% would increase adoption by anywhere from 0.5% (i.e., one-half of one percent) to 5.8%. In other words, a range from approximately zero to a small but significant increase.
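For readers who want the arithmetic spelled out, here is a minimal sketch of that calculation, using the textbook relationship (percent change in quantity ≈ elasticity × percent change in price) and the elasticity estimates cited above; the 10% price cut is simply the hypothetical from the text.

```python
# Predicted change in adoption from a 10% price reduction, under the
# elasticity estimates cited in the text.
elasticities = {
    "Williams & Zao (2019)": -0.05,
    "Ford (2021), low": -0.33,
    "Ford (2021), high": -0.58,
}
price_change = -0.10   # a 10% reduction in the price of internet service

for source, e in elasticities.items():
    quantity_change = e * price_change   # %ΔQ ≈ elasticity × %ΔP
    print(f"{source}: adoption up roughly {quantity_change:.1%}")
```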
It is unsurprising that the demand for internet services is so inelastic, especially among those who do not subscribe to broadband or smartphone service. One reason is the nature of demand curves. Generally speaking, as quantity demanded increases (i.e., moves downward along the demand curve), the demand curve becomes less elastic, as shown in the figure below (which is an illustration of a hypothetical demand curve). With adoption currently at more than 90% of households, the remaining nonadopters are much less likely to adopt at any price.
Thus, there is a possibility that the ACP may be so successful that the program has hit a point of significant diminishing marginal returns. Now that nearly 95% of U.S. households with access to at-home internet actually use it, it may be very difficult and costly to convert the remaining 5% of nonadopters. For example, if Williams & Zao’s estimate of a price elasticity of –0.05 is correct, then even a subsidy that provided “free” internet would convert only half of the 5% of nonadopters.
Keeping the Country Connected
With all of this in mind, it’s important to recognize that adoption rates should not be the sole metric of the program’s success.
The ACP is not an attempt to create a perfect government program, but rather to address the imperfect realities we face. Some individuals may never adopt internet services, just as some never installed home-telephone services. Even at the peak of landline use in 1998, only 96.2% of households had one.
On the other hand, those who value broadband access may be forced to discontinue service if faced with financial difficulties. Therefore, the program’s objective should encompass both connecting new users and ensuring that economically vulnerable individuals maintain access.
Instead of pursuing an ideal regulatory or subsidy program, we should focus on making the most informed decisions in a context where information is limited. We know there is general demand for internet access and that a significant number of households might discontinue services during economic downturns. And we also know that, in light of these realities, numerous stakeholders advocate for invasive interventions in the broadband market, potentially jeopardizing private investment incentives.
Thus, even if the ACP is not perfect in itself, it goes a long way toward ensuring that the least well-off stay connected, while also allowing private providers to continue their track record of providing high-speed, affordable broadband.
And although we do not have data at the moment demonstrating exactly how many households would discontinue internet service in the absence of subsidies, if Congress does not appropriate additional ACP funds, we may soon have an unfortunate natural experiment that helps us to find out.
The Federal Trade Commission might soon be charging rent to Meta. The commission earlier this week issued (bear with me) an “Order to Show Cause why the Commission should not modify its Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (July 27, 2012), as modified by Order Modifying Prior Decision and Order, In the Matter of Facebook, Inc., Docket No. C-4365 (Apr. 27, 2020).”
It’s an odd one (I’ll get to that) and the third distinct Meta matter for the FTC in 2023.
Recall that the FTC and Meta faced off in federal court earlier this year, as the commission sought a preliminary injunction to block the company’s acquisition of virtual-reality studio Within Unlimited. As I wrote in a prior post, U.S. District Court Judge Edward J. Davila denied the FTC’s request in late January. Davila’s order was about more than just the injunction: it was predicated on the finding that the FTC was not likely to prevail in its antitrust case. That was not entirely surprising outside FTC HQ (perhaps not inside either), as I was but one in a long line of observers who had found the FTC’s case to be weak.
No matter for the not-yet-proposed FTC Bureau of Let’s-Sue-Meta, as there’s another FTC antitrust matter pending: the commission also seeks to unwind Facebook’s 2012 acquisition of Instagram and its 2014 acquisition of WhatsApp, even though the FTC reviewed both mergers at the time and allowed them to proceed. Apparently, antitrust apples are never too old for another bite. The FTC’s initial case seeking to unwind the earlier deals was dismissed, but its amended complaint has survived, and the case remains to be heard.
Back to the modification of the 2020 consent order, which famously set a record for privacy remedies: $5 billion, plus substantial behavioral remedies to run for 20 years (with the monetary penalty exceeding the EU’s highest by an order of magnitude). Then-Chair Joe Simons and then-Commissioners Noah Phillips and Christine Wilson accurately claimed that the settlement was “unprecedented, both in terms of the magnitude of the civil penalty and the scope of the conduct relief.” Two commissioners—Rebecca Slaughter and Rohit Chopra—dissented: they thought the unprecedented remedies inadequate.
I commend Chopra’s dissent, if only as an oddity. He rightly pointed out that the commissioners’ analysis of the penalty was “not empirically well grounded.” At no time did the commission produce an estimate of the magnitude of consumer harm, if any, underlying the record-breaking penalty. It never claimed to.
That’s odd enough. But then Chopra opined that “a rigorous analysis of unjust enrichment alone—which, notably, the Commission can seek without the assistance of the Attorney General—would likely yield a figure well above $5 billion.” That subjective likelihood also seemed to lack an empirical basis; certainly, Chopra provided none.
By all accounts, then, the remedies appeared to be wholly untethered from the magnitude of consumer harm wrought by the alleged violations. To be clear, I’m not disputing that Facebook violated the 2012 order, such that a 2019 complaint was warranted, even if I wonder now, as I wondered then, how a remedy that had nothing to do with the magnitude of harm could be an efficient one.
Now, Commissioner Alvaro Bedoya has issued a statement correctly acknowledging that “[t]here are limits to the Commission’s order modification authority.” Specifically, the commission must “identify a nexus between the original order, the intervening violations, and the modified order.” Bedoya wrote that he has “concerns about whether such a nexus exists” for one of the proposed modifications. He still voted to go ahead with the proposal, as did Slaughter and Chair Lina Khan, who voiced no concerns at all.
It’s odder still. In its heavily redacted order, the commission appears to ground its proposal in conduct alleged to have occurred before the 2020 order that it now seeks to modify. There are no intervening violations there. For example:
From December 2017 to July 2019, Respondent also made misrepresentations relating to its Messenger Kids (“MK”) product, a free messaging and video calling application “specifically intended for users under the age of 13.”
. . . [Facebook] represented that MK users could communicate in MK with only parent-approved contacts. However, [Facebook] made coding errors that resulted in children participating in group text chats and group video calls with unapproved contacts under certain circumstances.
Perhaps, but what circumstances? According to Meta (and the FTC), Meta discovered, corrected, and reported the coding errors to the FTC in 2019. Of course, Meta is bound to comply with the 2020 Consent Order. But were they bound to do so in 2019? They’ve always been subject to the FTC’s “unfair and deceptive acts and practices” (UDAP) authority, but why allege 2019 violations now?
What harm is being remedied? On the one hand, there seems to have been an inaccurate statement about something parents might care about: a representation that users could communicate in Messenger Kids only with parent-approved contacts. On the other hand, there’s no allegation that such communications (with approved contacts of the approved contacts) led to any harm to the kids themselves.
Given all of that, why does the commission seek to impose substantial new requirements on Meta? For example, the commission now seeks restrictions on Meta:
…collecting, using, selling, licensing, transferring, sharing, disclosing, or otherwise benefitting from Covered Information collected from Youth Users for the purposes of developing, training, refining, improving, or otherwise benefitting Algorithms or models; serving targeted advertising, or enriching Respondent’s data on Youth users.
There’s more, but that’s enough to have “concerns about” the existence of a nexus between the since-remedied coding errors and the proposed “modification.” Or to put it another way, I wonder what one has to do with the other.
The only violation alleged to have occurred after the 2020 consent order was finalized has to do with the initial 2021 report of the assessor—an FTC-approved independent monitor of Facebook/Meta’s compliance—covering the period from October 25, 2020 to April 22, 2021. There, the assessor reported that:
…the key foundational elements necessary for an effective [privacy] program are in place . . . [but] substantial additional work is required, and investments must be made, in order for the program to mature.
We don’t know what this amounts to. The initial assessment reported that the basic elements of the firm’s “comprehensive privacy program” were in place, but that substantial work remained. Did progress lag expectations? What were the failings? Were consumers harmed? Did Facebook/Meta fail to address deficiencies identified in the report? If so, for how long? We’re not told a thing.
Again, what’s the nexus? And why the requirement that Meta “delete Covered Information collected from a User as a Youth unless [Meta] obtains Affirmative Express Consent from the User within a reasonable time period, not to exceed six (6) months after the User’s eighteenth birthday”? That’s a worry, not because there’s nothing there, but because substantial additional costs are being imposed without any account of their nexus to consumer harm, supposing there is one.
Some might prefer such an opt-in policy—one of two that would be required under the proposed modification—but it’s not part of the 2020 consent agreement and it’s not otherwise part of U.S. law. It does resemble a requirement under the EU’s General Data Protection Regulation. But the GDPR is not U.S. law and there are good reasons for that—see, for example, here, here, here, and here.
For one thing, a required opt-in for all such information, in all the ways it may live on in the firm’s data and models, can be onerous for users, not just the firm. Will young adults be spared concrete harms because of the requirement? It’s highly likely that they’ll have less access to information (and to less information), but highly unlikely that the reduction will be confined to that to which they (and their parents) would not consent. What will be the net effect?
Requirements that apply “[p]rior to … introducing any new or modified products, services, or features” raise a question about the level of granularity anticipated, given that limitations on the use of covered information apply to the training, refining, or improving of any algorithm or model, and that products, services, and features might be modified in various ways daily, or even in real time. Any such modification would require that the most recent independent assessment report find that all of the many requirements of the mandated privacy program have been met. If not, then nothing new—including no modifications—is permitted until the assessor provides written confirmation that all material gaps and weaknesses have been “fully” remediated.
Is this supposed to entail independent oversight of every design decision involving information from youth users? Automated modifications? Or that everything come to a halt if any issues are reported? I gather that nobody—not even Meta—proposes to give the company carte blanche with youth information. But carte blanque?
As we’ve been discussing extensively at today’s International Center for Law & Economics event on congressional oversight of the commission, the FTC has a dual competition and consumer-protection enforcement mission. Efficient enforcement of the antitrust laws requires, among other things, that the costs of violations (including remedies) reflect the magnitude of consumer harm. That’s true for privacy, too. There’s no route to coherent—much less complementary—FTC-enforcement programs if consumer protection imposes costs that are wholly untethered from the harms it is supposed to address.