Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the ‘safe harbour’ law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech.
In response to Twitter’s decision, along with an Executive Order released by the President that attacked Section 230, Senator Josh Hawley (R – MO) offered a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act”. This would require online platforms to engage in “good faith” moderation according to clearly stated terms of service – in effect, restricting Section 230’s protections to online platforms deemed to have done enough to moderate content ‘fairly’.
While it may seem a sensible standard, this approach, if enacted, would violate the First Amendment as an unconstitutional condition on a government benefit, thereby undermining long-standing conservative principles and the ability of conservatives to be treated fairly online.
There is established legal precedent that Congress may not grant benefits on conditions that violate Constitutionally-protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law that withheld funds from universities that did not allow military recruiters on campus would be unconstitutional if it constrained those universities’ First Amendment rights to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own decisions on moderation, Congress may not condition Section 230 immunity on platforms taking a certain editorial stance it has dictated.
Aware of this precedent, the bill attempts to circumvent the obstacle by conditioning immunity on issues unrelated to anti-conservative bias in moderation. Specifically, Senator Hawley’s bill conditions immunity on platforms having terms of service for content moderation, and makes them subject to lawsuits if they do not act in “good faith” in enforcing those terms.
It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard appears to apply only to the enforcement of an online platform’s terms of service. It can’t, under the First Amendment, actually dictate what those terms of service say. So an online platform could, in theory, explicitly state in its terms of service that it believes some forms of conservative speech are “hate speech” it will not allow.
Mandating terms of service on content moderation is arguably akin to disclosures like labelling requirements, because it makes clear to platforms’ customers what they’re getting. There are, however, some limitations under the commercial speech doctrine as to what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement for terms of service outlining content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone would not be unconstitutional.
But it is clear from the statutory definition of “good faith” that Senator Hawley is trying to overwhelm online platforms with lawsuits on the grounds that they have enforced these rules selectively and therefore not in “good faith”.
These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because they would open them up to being sued over any moderation decision, including moderation completely unrelated to any purported anti-conservative bias. Any time a YouTuber was aggrieved about a video being pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all other similarly situated users were treated the same way. Any time a post was flagged on Facebook, for example for engaging in online bullying or for spreading false information, the same situation could arise.
This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.
Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result will be far less content moderation on a whole range of other areas. It is precisely this scenario that Section 230 was designed to prevent, in order to encourage platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.
It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about.
This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine”, which required radio stations to provide “balanced discussion” and in practice allowed courts and federal agencies to police broadcast content, until the FCC repealed it under President Reagan. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News over its “Fair and Balanced” slogan, stating:
I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.
And recently conservatives were arguing businesses like Masterpiece Cakeshop should not be compelled to exercise their First Amendment rights against their will. All of these cases demonstrate once the state starts to try to stipulate what views can and cannot be broadcast by private organisations, conservatives will be the ones who suffer.
Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution, and would trample over the freedom of speech it is meant to protect. Conservatives should reject it.
As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act’s proposal is seemingly simple, but its fallout would be anything but.
Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.
Supporters of the Act believe the best practices are necessary to ensure that platform companies effectively police CSAM, while critics assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.
The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.
More can be done about illegal conduct online
On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is also entirely possible that the incentives for finding and policing CSAM are not perfectly aligned with other conflicting incentives private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible.
By the same token, an immunity shield is necessary in some form to facilitate user generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.
In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.
In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses.
In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform.
Insofar as the commission created by the Act works to create best practices that align the incentives of firms with the removal of CSAM, it has a lot to offer. Yet a better solution than the Act would be for Congress to establish policy that works with the private processes already in development.
Short of that more ideal solution, however, it is critical that the Act establish the boundaries of the commission’s remit very clearly and keep it from venturing into technical areas outside of its expertise.
The complicated problem of encryption (and technology)
The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and this breadth could ultimately produce dangerous unintended consequences.
The Act only calls for two out of nineteen members to have some form of computer science background. A panel of non-technical experts should not design any technology—encryption or otherwise.
To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.
If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.
Congress is right to consider whether there is better policy to be had for aligning the incentives of the platforms with the deterrence of CSAM – including possible conditional access to Section 230’s liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn’t mean that the new commission is suited to vetting, adopting and updating technical standards – it clearly isn’t. Conversely, to the extent that encryption and similarly complex technologies could be subject to broad policy change, it should be through an explicit and considered democratic process, and not as a by-product of the Act.
In the wake of the launch of Facebook’s content oversight board, Republican Senator Josh Hawley and FCC Commissioner Brendan Carr, among others, have taken to Twitter to level criticisms at the firm and, in the process, demonstrate just how far the Right has strayed from its first principles around free speech and private property. For his part, Commissioner Carr’s thread makes the case that the members of the board are highly partisan and mostly left-wing and can’t be trusted with the responsibility of oversight, while Senator Hawley argued that the Board’s very existence is just further evidence of the need to break Facebook up.
Both Hawley and Carr have been lauded in right-wing circles, but in reality their positions contradict conservative understandings of the free speech and private property protections enshrined in the First Amendment.
I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.
With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).
Commissioner Carr’s complaint and Senator Hawley’s antitrust approach of breaking up Facebook has much more in common with the views traditionally held by left-wing Democrats on the need for the government to regulate private actors in order to promote speech interests. Originalists and law & economics scholars, on the other hand, have consistently taken the opposite point of view that the First Amendment protects against government infringement of speech interests, including protecting the right to editorial discretion. While there is clearly a conflict of visions in First Amendment jurisprudence, the conservative (and, in my view, correct) point of view should not be jettisoned by Republicans to achieve short-term political gains.
The First Amendment restricts government action, not private action
The First Amendment, by its very text, only applies to government action: “Congress shall make no law . . . abridging the freedom of speech.” This applies to the “State[s]” through the Fourteenth Amendment. There is extreme difficulty in finding any textual hook to say the First Amendment protects against private action, like that of Facebook. As Justice Kavanaugh wrote for the Supreme Court in Manhattan Community Access Corp. v. Halleck (2019):
Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).
This was true at the adoption of the First Amendment and remains true today in a high-tech world. Federal district courts have consistently dismissed First Amendment lawsuits against Facebook on the grounds there is no state action.
For instance, in Nyabwa v. Facebook, the plaintiff initiated a civil rights lawsuit against Facebook for restricting his use of the platform. The U.S. District Court for the Southern District of Texas dismissed the case, noting:
Because the First Amendment governs only governmental restrictions on speech, Nyabwa has not stated a cause of action against FaceBook… Like his free speech claims, Nyabwa’s claims for violation of his right of association and violation of his due process rights are claims that may be vindicated against governmental actors pursuant to § 1983, but not a private entity such as FaceBook.
Similarly, in Young v. Facebook, the U.S. District Court for the Northern District of California rejected a claim that Facebook violated the First Amendment by deactivating the plaintiff’s Facebook page. The court declined to subject Facebook to the First Amendment analysis, stating that “because Young has not alleged any action under color of state law, she fails to state a claim under § 1983.”
The First Amendment restricts antitrust actions against Facebook, not Facebook’s editorial discretion over its platform
Far from restricting Facebook, the First Amendment actually restricts government actions aimed at platforms like Facebook when they engage in editorial discretion by moderating content. If an antitrust plaintiff was to act on the impulse to “break up” Facebook because of alleged political bias in its editorial discretion, the lawsuit would be running headlong into the First Amendment’s protections.
There is no basis for concluding online platforms do not have editorial discretion under the law. In fact, the position of Facebook here is very similar to the newspaper in Miami Herald Publishing Co. v. Tornillo, in which the Supreme Court considered a state law giving candidates for public office a right to reply in newspapers to editorials written about them. The Florida Supreme Court upheld the statute, finding it furthered the “broad societal interest in the free flow of information to the public.” The U.S. Supreme Court, despite noting the level of concentration in the newspaper industry, nonetheless reversed. The Court explicitly found the newspaper had a First Amendment right to editorial discretion:
The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.
Online platforms have the same First Amendment protections for editorial discretion. For instance, in both Search King v. Google and Langdon v. Google, two different federal district courts ruled Google’s search results are subject to First Amendment protections, both citing Tornillo.
In Zhang v. Baidu.com, another district court went so far as to grant a Chinese search engine the right to editorial discretion in limiting access to democracy movements in China. The court found that the search engine “inevitably make[s] editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information.” Much like the search engine in Zhang, Facebook is clearly making editorial judgments about what information shows up in users’ newsfeeds and where to display it.
None of this changes because the generally applicable law is antitrust rather than some other form of regulation. For instance, in Tornillo, the Supreme Court took pains to distinguish the case from an earlier antitrust case against newspapers, Associated Press v. United States, which found that there was no broad exemption from antitrust under the First Amendment.
The Court foresaw the problems relating to government-enforced access as early as its decision in Associated Press v. United States, supra. There it carefully contrasted the private “compulsion to print” called for by the Association’s bylaws with the provisions of the District Court decree against appellants which “does not compel AP or its members to permit publication of anything which their `reason’ tells them should not be published.”
In other words, Tornillo and Associated Press establish that the government may not compel speech through regulation, including an antitrust remedy.
Once it is conceded that there is a speech interest here, the government must justify the use of antitrust law to compel Facebook to display the speech of users in the newsfeeds of others under the strict scrutiny test of the First Amendment. In other words, the use of antitrust law must be narrowly tailored to a compelling government interest. Even taking for granted that there may be a compelling government interest in facilitating a free and open platform (which is by no means certain), it is clear that this would not be narrowly tailored action.
First, “breaking up” Facebook is clearly overbroad as compared to the goal of promoting free speech on the platform. There is no need to break the company up simply because it has an Oversight Board that exercises editorial responsibilities. There are many less restrictive means, including market competition, which has greatly expanded consumer choice for communications and connections. Second, antitrust does not really offer a remedy for the free speech issues complained of here, as it would require courts to engage in long-term oversight and in the compelled speech foreclosed by Associated Press.
Note that this makes good sense from a law & economics perspective. Platforms like Facebook should be free to regulate the speech on their platforms as they see fit and consumers are free to decide which platforms they wish to use based upon that information. While there are certainly network effects to social media, the plethora of options currently available with low switching costs suggests that there is no basis for antitrust action against Facebook because consumers are unable to speak. In other words, the least restrictive means test of the First Amendment is best fulfilled by market competition in this case.
If there were a basis for antitrust intervention against Facebook, either through merger review or as a standalone monopoly claim, the underlying issue would be harm to competition. While this would have implications for speech concerns (which may be incorporated into an analysis through quality-adjusted price), it is inconceivable how an antitrust remedy could be formed on speech issues consistent with the First Amendment.
Despite now well-worn complaints by so-called conservatives in and out of the government about the baneful influence of Facebook and other Big Tech companies, the First Amendment forecloses government actions to violate the editorial discretion of these companies. Even if Commissioner Carr is right, this latest call for antitrust enforcement against Facebook by Senator Hawley should be rejected for principled conservative reasons.
As Nellie Bowles recently wrote in The New York Times:

Before the coronavirus, there was something I used to worry about. It was called screen time. Perhaps you remember it.
I thought about it. I wrote about it. A lot. I would try different digital detoxes as if they were fad diets, each working for a week or two before I’d be back on that smooth glowing glass.
Now I have thrown off the shackles of screen-time guilt. My television is on. My computer is open. My phone is unlocked, glittering. I want to be covered in screens. If I had a virtual reality headset nearby, I would strap it on.
Bowles isn’t alone. The Washington Post recently documented how social distancing has caused people to “rethink one of the great villains of modern technology: screens.” Matthew Yglesias of Vox has been critical of tech in the past as well, but recently admitted that these tools are “making our lives much better.” Cal Newport might have called for Twitter to be shut down, but now thinks the service can be useful. These anecdotes speak to a larger trend. According to one national poll, some 88 percent of Americans now have a better appreciation for technology since this pandemic has forced them to rely upon it.
Most psychologists steer clear of using the term addiction because it means a person engages in hazardous use, shows tolerance, and neglects social roles. Because social media, gaming, and cell phone use don’t meet this threshold, the profession tends to describe those who experience negative impacts as engaging in problematic use of the tech, which is only applied to a small minority. According to one estimate, for example, only half of a percent of gamers have patterns of problematic use.
Even though tech use doesn’t meet the criteria for addiction, the term addiction finds purchase in policy discussions and media outlets because it suggests a healthier norm. Computer games have prosocial benefits, yet it is common to hear that the activity is no match for going outside to play. The same kind of argument exists with social media and phone use; face-to-face communication is preferred to tech-enabled communication.
But the coronavirus has inverted the normal conditions. Social distancing doesn’t allow us to connect in person or play outside with friends. Faced with no other alternative, people have embraced technology. Videoconferencing is up, as is social media use. This new norm has brought with it a needed rethink of critiques of tech. Even before this moment, however, the research on tech effects has had its problems.
To begin, even though it has been researched extensively, screen time and social media use aren’t shown to clearly cause harm. Earlier this year, psychologists Candice Odgers and Michaeline Jensen conducted a massive literature review and summarized the research as “a mix of often conflicting small positive, negative and null associations.” The researchers also point out that studies finding a negative relationship between well-being and tech use tend to be correlational, not causational, and thus are “unlikely to be of clinical or practical significance” to parents or therapists.
Through no fault of their own, researchers tend to focus on a limited number of relationships when it comes to tech use. But professors Amy Orben and Andrew Przybylski were able to sidestep these problems by getting computers to test every theoretically defensible hypothesis. In a writeup appropriately titled “Beyond Cherry-Picking,” the duo explained why this method is important to policy makers:
Although statistical significance is often used as an indicator that findings are practically significant, the paper moves beyond this surrogate to put its findings in a real-world context. In one dataset, for example, the negative effect of wearing glasses on adolescent well-being is significantly higher than that of social media use. Yet policymakers are currently not contemplating pumping billions into interventions that aim to decrease the use of glasses.
Their academic paper throws cold water on the screen time and tech use debate. Since social media explains only 0.4% of the variation in well-being, much greater welfare gains can be made by concentrating on other policy issues. For example, regularly eating breakfast, getting enough sleep, and avoiding marijuana use play much larger roles in the well-being of adolescents. Social media is only a tiny portion of what determines well-being as the chart below helps to illustrate.
Second, most social media research relies on self-reporting methods, which are systematically biased and often unreliable. Communication professor Michael Scharkow, for example, compared self-reports of Internet use with computer log files, which show everything that a computer has done and when, and found that “survey data are only moderately correlated with log file data.” A quartet of psychology professors in the UK discovered that self-reported smartphone use and social media addiction scales face similar problems, in that they don’t correctly capture reality. Patrick Markey, Professor and Director of the IR Laboratory at Villanova University, summarized the work: “the fear of smartphones and social media was built on a castle made of sand.”
Expert bodies have been changing their tune as well. The American Academy of Pediatrics took a hardline stance for years, preaching digital abstinence. But the organization has since backpedaled and now says that screens are fine in moderation. The organization now suggests that parents and children should work together to create boundaries.
Once this pandemic is behind us, policymakers and experts should reconsider the screen time debate. We need to move from loaded terms like addiction and embrace a more realistic model of the world. The truth is that everyone’s relationship with technology is complicated. Instead of paternalistic legislation, leaders should place the onus on parents and individuals to figure out what is right for them.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Dirk Auer, (Senior Researcher, Liege Competition & Innovation Institute; Senior Fellow, ICLE).]
Across the globe, millions of people are rapidly coming to terms with the harsh realities of life under lockdown. As governments impose ever-greater social distancing measures, many of the daily comforts we took for granted are no longer available to us.
And yet, we can all take solace in the knowledge that our current predicament would have been far less tolerable if the COVID-19 outbreak had hit us twenty years ago. Among others, we have Big Tech firms to thank for this silver lining.
Contrary to the claims of critics, such as Senator Josh Hawley, Big Tech has produced game-changing innovations that dramatically improve our ability to fight COVID-19.
The previous post in this series showed that innovations produced by Big Tech provide us with critical information, allow us to maintain some level of social interactions (despite living under lockdown), and have enabled companies, universities and schools to continue functioning (albeit at a severely reduced pace).
But apart from information, social interactions, and online working (and learning), what has Big Tech ever done for us?
One of the most underappreciated ways in which technology (mostly pioneered by Big Tech firms) is helping the world deal with COVID-19 has been a rapid shift towards contactless economic transactions. Not only are consumers turning towards digital goods to fill their spare time, but physical goods (most notably food) are increasingly being exchanged without any direct contact.
These ongoing changes would be impossible without the innovations and infrastructure that have emerged from tech and telecommunications companies over the last couple of decades.
Of course, the overall picture is still bleak. The shift to contactless transactions has only slightly softened the tremendous blow suffered by the retail and restaurant industries – some predictions suggest their overall revenue could fall by at least 50% in the second quarter of 2020. Nevertheless, as explained below, this situation would likely be significantly worse without the many innovations produced by Big Tech companies. For that we should be thankful.
1. Food and other goods
For a start, the COVID-19 outbreak (and government measures to combat it) has caused many brick & mortar stores and restaurants to shut down. These closures would have been far harder to implement before the advent of online retail and food delivery platforms.
At the time of writing, e-commerce websites already appear to have witnessed a 20-30% increase in sales (other sources report a 52% increase compared to the same time last year). This increase will likely continue in the coming months.
The Amazon Retail platform has been at the forefront of this online shift.
Having witnessed a surge in online shopping, Amazon announced that it would hire 100,000 distribution workers to cope with the increased demand. Amazon’s staff have also been asked to work overtime (in exchange, Amazon has doubled their pay for overtime hours).
To attract these new hires and ensure that existing ones continue working, Amazon simultaneously announced that it would be increasing wages in virus-hit countries (from $15 to $17 per hour in the US).
Amazon also stopped accepting “non-essential” goods in its warehouses, in order to prioritize the sale of household essentials and medical goods that are in high demand.
Finally, in Italy, Amazon decided not to stop its operations, despite some employees testing positive for COVID-19. Controversial as this move may be, Amazon’s private interests are aligned with those of society – maintaining the supply of essential goods is now more important than ever.
And it is not just Amazon that is seeking to fill the breach left temporarily by brick & mortar retail. Other retailers are also stepping up efforts to distribute their goods online.
The apps of traditional retail chains have witnessed record daily downloads (thus relying on the smartphone platforms pioneered by Google and Apple).
Walmart has become the go-to choice for online food purchases.
The shift to online shopping mimics what occurred in China, during its own COVID-19 lockdown.
According to an article published in HBR, e-commerce penetration reached 36.6% of retail sales in China (compared to 29.7% in 2019). The same article explains how Alibaba’s technology is enabling traditional retailers to better manage their supply chains, ultimately helping them to sell their goods online.
A study by Nielsen found that 67% of retailers would expand their online channels.
Spurred by compassion and/or a desire to boost its brand abroad, Alibaba and its founder, Jack Ma, have made significant efforts to provide critical medical supplies (notably test kits and surgical masks) to COVID-hit countries such as the US and Belgium.
And it is not just retail that is adapting to the outbreak. Many restaurants are trying to stay afloat by shifting from in-house dining to deliveries. These attempts have been made possible by the emergence of food delivery platforms, such as UberEats and Deliveroo.
These platforms have taken several steps to facilitate food deliveries during the outbreak.
Both UberEats and Deliveroo have put in place systems for deliveries to take place without direct physical contact. While not entirely risk-free, meal delivery can provide welcome relief to people experiencing stressful lockdown conditions.
Similarly, the shares of Blue Apron – an online meal-kit delivery service – have surged more than 600% since the start of the outbreak.
In short, COVID-19 has caused a drastic shift towards contactless retail and food delivery services. It is an open question how much of this shift would have been possible without the pioneering business model innovations brought about by Amazon and its online retail platform, as well as modern food delivery platforms, such as UberEats and Deliveroo. At the very least, it seems unlikely that it would have happened as fast.
The entertainment industry is another area where increasing digitization has made lockdowns more bearable. The reason is obvious: locked-down consumers still require some form of amusement. With physical supply chains under tremendous strain, and social gatherings no longer an option, digital media has thus become the default choice for many.
Data published by Verizon shows a sharp increase (in the week running from March 9 to March 16) in the consumption of digital entertainment, especially gaming.
This echoes other sources, which also report that the use of traditional streaming platforms has surged in areas hit by COVID-19.
Disney Plus has also been highly popular. According to one source, half of US homes with children under the age of 10 purchased a Disney Plus subscription. This trend is expected to continue during the COVID-19 outbreak. Disney even released Frozen II three months ahead of schedule in order to boost new subscriptions.
Hollywood studios have started releasing some of their lower-profile titles directly on streaming services.
Traffic has also increased significantly on popular gaming platforms.
These are just a tiny sample of the many ways in which digital entertainment is filling the void left by social gatherings. It is thus central to the lives of people under lockdown.
2. Cashless payments
But all of the services listed above rely on cashless payments – be it to limit the risk of contagion or because these transactions take place remotely. Fintech innovations have thus turned out to be one of the foundations that make social distancing policies viable.
This is particularly evident in the food industry.
Food delivery platforms, like UberEats and Deliveroo, already relied on mobile payments.
Costa Coffee (a UK equivalent of Starbucks) went cashless in an attempt to limit the spread of COVID-19.
Domino’s Pizza, among other franchises, announced that it would move to contactless deliveries.
President Donald Trump is said to have discussed plans to keep drive-thru restaurants open during the outbreak. This would almost certainly imply exclusively digital payments.
And although doubts remain concerning the extent to which the SARS-CoV-2 virus may, or may not, be transmitted via banknotes and coins, many other businesses have preemptively ceased to accept cash payments.
As Jodie Kelley, CEO of the Electronic Transactions Association, put it in a CNBC interview:
Contactless payments have come up as a new option for consumers who are much more conscious of what they touch.
This increased demand for cashless payments has been a blessing for Fintech firms.
Though it is too early to gauge the magnitude of this shift, early signs – notably from China – suggest that mobile payments have become more common during the outbreak.
In China, Alipay announced that it expected to radically expand its services to new sectors – restaurants, cinema bookings, real estate purchases – in an attempt to compete with WeChat.
PayPal has also witnessed an uptick in transactions, though this growth might ultimately be weighed down by declining economic activity.
In the past, Facebook had revealed plans to offer mobile payments across its platforms – Facebook, WhatsApp, Instagram & Libra. Those plans may not have been politically viable at the time. The COVID-19 outbreak could conceivably change this.
In short, the COVID-19 outbreak has increased our reliance on digital payments, as these can both take place remotely and, potentially, limit contamination via banknotes. None of this would have been possible twenty years ago when industry pioneers, such as PayPal, were in their infancy.
3. High speed internet access
Similarly, it goes without saying that none of the above would be possible without the tremendous investments that have been made in broadband infrastructure, most notably by internet service providers. Though these companies have often faced strong criticism from the public, they provide the backbone upon which outbreak-stricken economies can function.
By causing so many activities to move online, the COVID-19 outbreak has put broadband networks to the test. So far, broadband infrastructure around the world has been up to the task. This is partly because the spike in usage has occurred in daytime hours (when network capacity is less strained), but also because ISPs traditionally rely on a number of tools to limit peak-time usage.
The biggest increases in usage seem to have occurred in daytime hours, as data from OpenVault illustrates.
Anecdotal data also suggests that, so far, fixed internet providers have not significantly struggled to handle this increased traffic (the same goes for Content Delivery Networks). Not only were these networks already designed to withstand high peaks in demand, but ISPs, such as Verizon, have increased their capacity to avoid potential issues.
For instance, internet speed tests performed using Ookla suggest that average download speeds in locked-down regions decreased only marginally, if at all, compared to previous levels.
However, the same data suggests that mobile networks have faced slightly larger decreases in performance, though these do not appear to be severe. For instance, contrary to contemporaneous reports, a mobile network outage that occurred in the UK is unlikely to have been caused by a COVID-related surge.
The robustness exhibited by broadband networks is notably due to long-running efforts by ISPs (spurred by competition) to improve download speeds and latency. As one article put it:
For now, cable operators’ and telco providers’ networks are seemingly withstanding the increased demands, which is largely due to the upgrades that they’ve done over the past 10 or so years using technologies such as DOCSIS 3.1 or PON.
Pushed in part by Google Fiber’s launch back in 2012, the large cable operators and telcos, such as AT&T, Verizon, Comcast and Charter Communications, have spent years upgrading their networks to 1-Gig speeds. Prior to those upgrades, cable operators in particular struggled with faster upload speeds, and the slowdown of broadband services during peak usage times, such as after school and in the evenings, as neighborhood nodes became overwhelmed.
This is not without policy ramifications.
For a start, these developments might vindicate antitrust enforcers that allowed mergers that led to higher investments, sometimes at the expense of slight reductions in price competition. This is notably the case for so-called 4 to 3 mergers in the wireless telecommunications industry. As an in-depth literature review by ICLE scholars concludes:
Studies of investment also found that markets with three facilities-based operators had significantly higher levels of investment by individual firms.
A second policy ramification concerns net neutrality. Fearing congestion, European authorities asked content providers – notably streaming services – to reduce the quality of their video streams across the board. This may seem like a trivial problem, but it was entirely avoidable: such blanket measures unnecessarily penalize those consumers and ISPs who do not face congestion issues (and, conversely, let under-performing ISPs off the hook, disincentivizing further investments on their part). This is all the more unfortunate given that, as argued above, streaming services are essential to locked-down consumers.
Critics may retort that small quality decreases hardly have any impact on consumers. But, if this is indeed the case, then content providers were using up unnecessary amounts of bandwidth before the COVID-19 outbreak (something that is less likely to occur without net neutrality obligations). And if not, then European consumers have indeed been deprived of something they valued. The shoe is thus on the other foot.
These normative considerations aside, the big point is that we can all be thankful to live in an era of high-speed internet.
4. Concluding remarks
Big Tech is rapidly emerging as one of the heroes of the COVID-19 crisis. Companies that were once on the receiving end of daily reproaches – by the press, enforcers, and scholars alike – are gaining renewed appreciation from the public. Times have changed since the early days of these companies, when consumers marvelled at the endless possibilities that their technologies offered. Today we are coming to realize how essential tech companies have become to our daily lives, and how they make society more resilient in the face of fat-tailed events, like pandemics.
The move to a contactless, digital, economy is a critical part of what makes contemporary societies better-equipped to deal with COVID-19. As this post has argued, online delivery, digital entertainment, contactless payments and high speed internet all play a critical role.
To think that we receive some of these services for free…
Last year, Erik Brynjolfsson, Avinash Collis and Felix Eggers published a paper in PNAS, showing that consumers were willing to pay significant sums for online goods they currently receive free of charge. One can only imagine how much larger those sums would be if that same experiment were repeated today.
As one journalist put it:
The pandemic does not make any of the complaints about the tech giants less valid. They are still drivers of surveillance capitalism who duck their fair share of taxes and abuse their power in the marketplace. We in the press must still cover them aggressively and skeptically. And we still need a reckoning that protects the privacy of citizens, levels the competitive playing field, and holds these giants to account. But the momentum for that reckoning doesn’t seem sustainable at a moment when, to prop up our diminished lives, we are desperately dependent on what they’ve built. And glad that they built it.
While it is still early to draw policy lessons from the outbreak, one thing seems clear: the COVID-19 pandemic provides yet further evidence that tech policymakers should be extremely careful not to kill the goose that laid the golden egg, by promoting regulations that may thwart innovation (or the opposite).
Big Tech continues to be mired in “a very antitrust situation,” as President Trump put it in 2018. Antitrust advocates have zeroed in on Facebook, Google, Apple, and Amazon as their primary targets. These advocates justify their proposals by pointing to the trio of antitrust cases against IBM, AT&T, and Microsoft. Elizabeth Warren, in announcing her plan to break up the tech giants, highlighted the case against Microsoft:
The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge. The story demonstrates why promoting competition is so important: it allows new, groundbreaking companies to grow and thrive — which pushes everyone in the marketplace to offer better products and services.
As Tim Wu put it:
If there is one thing I’d like the tech world to understand better, it is that the trilogy of antitrust suits against IBM, AT&T, and Microsoft played a major role in making the United States the world’s preeminent tech economy.
The IBM-AT&T-Microsoft trilogy of antitrust cases each helped prevent major monopolists from killing small firms and asserting control of the future (of the 80s, 90s, and 00s, respectively).
A list of products and firms that owe at least something to the IBM-AT&T-Microsoft trilogy.
(2) AT&T: Modems, ISPs, AOL, the Internet and Web industries
(3) Microsoft: Google, Facebook, Amazon
Wu argues that by breaking up the current crop of dominant tech companies, we can sow the seeds for the next one. But this reasoning depends on an incorrect — albeit increasingly popular — reading of the history of the tech industry. Entrepreneurs take purposeful action to produce innovative products for an underserved segment of the market. They also respond to broader technological change by integrating or modularizing different products in their market. This bundling and unbundling is a never-ending process.
Whether the government distracts a dominant incumbent with a failed lawsuit (e.g., IBM), imposes an ineffective conduct remedy (e.g., Microsoft), or breaks up a government-granted national monopoly into regional monopolies (e.g., AT&T), the dynamic nature of competition between tech companies will far outweigh the effects of antitrust enforcers tilting at windmills.
In a series of posts for Truth on the Market, I will review the cases against IBM, AT&T, and Microsoft and discuss what we can learn from them. In this introductory article, I will explain the relevant concepts necessary for understanding the history of market competition in the tech industry.
Competition for the Market
In industries like tech that tend toward “winner takes most,” it’s important to distinguish between competition during the market maturation phase — when no clear winner has emerged and the technology has yet to be widely adopted — and competition after the technology has been diffused in the economy. Benedict Evans recently explained how this cycle works (emphasis added):
When a market is being created, people compete at doing the same thing better. Windows versus Mac. Office versus Lotus. MySpace versus Facebook. Eventually, someone wins, and no-one else can get in. The market opportunity has closed. Be, NeXT/Path were too late. Monopoly!
But then the winner is overtaken by something completely different that makes it irrelevant. PCs overtook mainframes. HTML/LAMP overtook Win32. iOS & Android overtook Windows. Google overtook Microsoft.
Tech antitrust too often wants to insert a competitor to the winning monopolist, when it’s too late. Meanwhile, the monopolist is made irrelevant by something that comes from totally outside the entire conversation and owes nothing to any antitrust interventions.
In antitrust parlance, this is known as competing for the market. By contrast, in more static industries where the playing field doesn’t shift so radically and the market doesn’t tip toward “winner take most,” firms compete within the market. What Benedict Evans refers to as “something completely different” is often a disruptive product.
As Clay Christensen explains in the Innovator’s Dilemma, a disruptive product is one that is low-quality (but fast-improving), low-margin, and targeted at an underserved segment of the market. Initially, it is rational for incumbent firms to ignore the disruptive technology and focus on improving their legacy technology to serve high-margin customers. But once the disruptive technology improves to the point where it can serve the whole market, it is too late for the incumbent to switch technologies and catch up. This process looks like overlapping S-curves.
We see these S-curves in the technology industry all the time.
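The dynamic described above can be made concrete with a toy model: two logistic (“S”) curves, one for an incumbent technology that matured early and one for a faster-improving disruptor. All parameters below are purely illustrative, not estimates of any real market:

```python
import math

def logistic(t, ceiling, midpoint, rate):
    """Performance of a technology at time t, modeled as a logistic S-curve."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Incumbent: matured early, so it starts ahead but plateaus at a lower ceiling.
def incumbent(t):
    return logistic(t, ceiling=100, midpoint=0, rate=0.5)

# Disruptor: starts far behind but improves faster toward a higher ceiling.
def disruptor(t):
    return logistic(t, ceiling=300, midpoint=8, rate=0.8)

# First (illustrative) period in which the disruptor overtakes the incumbent.
crossover = next(t for t in range(30) if disruptor(t) > incumbent(t))
print(crossover)  # 8
```

Until the crossover, ignoring the disruptor looks perfectly rational for the incumbent – its own curve is still higher; after it, catching up requires jumping to a new curve, which is precisely Christensen’s dilemma.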
As Christensen explains in the Innovator’s Solution, consumer needs can be thought of as “jobs-to-be-done.” Early on, when a product is just good enough to get a job done, firms compete on product quality and pursue an integrated strategy – designing, manufacturing, and distributing the product in-house. As the underlying technology improves and the product overshoots the needs of the jobs-to-be-done, products become modular and the primary dimension of competition moves to cost and convenience. As this cycle repeats itself, companies are either bundling different modules together to create more integrated products or unbundling integrated products to create more modular ones.
Moore’s Law is the gasoline that gets poured on the fire of technology cycles. Though this “law” is nothing more than the observation that “the number of transistors in a dense integrated circuit doubles about every two years,” the implications for dynamic competition are difficult to overstate. As Bill Gates explained in a 1994 interview with Playboy magazine, Moore’s Law means that computer power is essentially “free” from an engineering perspective:
When you have the microprocessor doubling in power every two years, in a sense you can think of computer power as almost free. So you ask, Why be in the business of making something that’s almost free? What is the scarce resource? What is it that limits being able to get value out of that infinite computing power? Software.
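Gates’s observation is easy to quantify. Assuming, as the “law” does, one doubling every two years, a short sketch of the compounding effect (the doubling period is the conventional figure, not a physical constant):

```python
def transistor_multiplier(years, doubling_period=2):
    """Factor by which transistor counts grow over `years`,
    assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Five doublings per decade: a 32x increase in transistor count.
print(transistor_multiplier(10))  # 32.0

# Over the twenty years discussed in this post: roughly a 1000x increase.
print(transistor_multiplier(20))  # 1024.0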
Exponentially smaller integrated circuits can be combined with new user interfaces and networks to create new computer classes, which themselves represent the opportunity for disruption.
Bell’s Law of Computer Classes
A corollary to Moore’s Law, Bell’s law of computer classes predicts that “roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.” Originally formulated in 1972, we have seen this prediction play out in the birth of mainframes, minicomputers, workstations, personal computers, laptops, smartphones, and the Internet of Things.
Understanding these concepts — competition for the market, disruptive innovation, Moore’s Law, and Bell’s Law of Computer Classes — will be crucial for understanding the true effects (or lack thereof) of the antitrust cases against IBM, AT&T, and Microsoft. In my next post, I will look at the DOJ’s (ultimately unsuccessful) 13-year antitrust battle with IBM.
This is the third in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here, and the second here). It draws on research from a soon-to-be published ICLE white paper.
(Comparison of Google and Apple’s smartphone business models. Red $ symbols represent money invested; Green $ symbols represent sources of revenue; Black lines show the extent of Google and Apple’s control over their respective platforms)
For the third in my series of posts about the Google Android decision, I will delve into the theories of harm identified by the Commission.
The big picture is that the Commission’s analysis was particularly one-sided. The Commission failed to adequately account for the complex business challenges that Google faced – such as monetizing the Android platform and shielding it from fragmentation. To make matters worse, its decision rests on dubious factual conclusions and extrapolations. The result is a highly unbalanced assessment that could ultimately hamstring Google and prevent it from effectively competing with its smartphone rivals, Apple in particular.
1. Tying without foreclosure
The first theory of harm identified by the Commission concerned the tying of Google’s Search app with the Google Play app, and of Google’s Chrome app with both the Google Play and Google Search apps.
Oversimplifying, Google required its OEMs to choose between either pre-installing a bundle of Google applications, or forgoing some of the most important ones (notably Google Play). The Commission argued that this gave Google a competitive advantage that rivals could not emulate (even though Google’s terms did not preclude OEMs from simultaneously pre-installing rival web browsers and search apps).
To support this conclusion, the Commission notably asserted that no alternative distribution channel would enable rivals to offset the competitive advantage that Google obtained from tying. This finding is, at best, dubious.
For a start, the Commission claimed that user downloads were not a viable alternative distribution channel, even though roughly 250 million apps are downloaded on Google’s Play store every day.
The Commission sought to overcome this inconvenient statistic by arguing that Android users were unlikely to download apps that duplicated the functionalities of a pre-installed app – why download a new browser if there is already one on the user’s phone?
But this reasoning is far from watertight. For instance, the 17th most-downloaded Android app, the “Super-Bright Led Flashlight” (with more than 587 million downloads), mostly replicates a feature that is pre-installed on all Android devices. Moreover, the five most downloaded Android apps (Facebook, Facebook Messenger, WhatsApp, Instagram and Skype) provide functionalities that are, to some extent at least, offered by apps that have, at some point or another, been preinstalled on many Android devices (notably Google Hangouts, Google Photos and Google+).
The Commission countered that communications apps were not appropriate counterexamples, because they benefit from network effects. But this overlooks the fact that the most successful communications and social media apps benefited from very limited network effects when they were launched, and that they succeeded despite the presence of competing pre-installed apps. Direct user downloads are thus a far more powerful vector of competition than the Commission cared to admit.
Similarly concerning is the Commission’s contention that paying OEMs or Mobile Network Operators (“MNOs”) to pre-install their search apps was not a viable alternative for Google’s rivals. Some of the reasons cited by the Commission to support this finding are particularly troubling.
For instance, the Commission claimed that high transaction costs prevented parties from concluding these pre-installation deals.
But pre-installation agreements are common in the smartphone industry. In recent years, Microsoft struck a deal with Samsung to pre-install some of its office apps on the Galaxy Note 10. In 2010, it also paid Verizon to pre-install the Bing search app on a number of Samsung phones. Likewise, a number of Russian internet companies have been in talks with Huawei to pre-install their apps on its devices. And Yahoo reached an agreement with Mozilla to become the default search engine for the Firefox web browser. Transaction costs do not appear to have been an obstacle in any of these cases.
The Commission also claimed that duplicating too many apps would cause storage space issues on devices.
And yet, a back-of-the-envelope calculation suggests that storage space is unlikely to be a major issue. For instance, the Bing Search app has a download size of 24MB, whereas typical entry-level smartphones generally have an internal memory of at least 64GB (that can often be extended to more than 1TB with the addition of an SD card). The Bing Search app thus takes up less than one-thousandth of these devices’ internal storage. Granted, the Yahoo search app is slightly larger than Microsoft’s, weighing almost 100MB. But this is still insignificant compared to a modern device’s storage space.
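The back-of-the-envelope arithmetic is easy to reproduce. The app sizes are the ones cited above; the 64GB capacity is the entry-level figure used in the text:

```python
def storage_share(app_size_mb, device_capacity_gb):
    """Fraction of a device's internal storage consumed by one app."""
    return app_size_mb / (device_capacity_gb * 1024)  # 1 GB = 1024 MB

# Bing Search (~24 MB) on an entry-level 64 GB device:
print(f"{storage_share(24, 64):.5f}")   # 0.00037 – under one-thousandth

# Yahoo Search (~100 MB) on the same device:
print(f"{storage_share(100, 64):.5f}")  # 0.00153 – still negligible
```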
Finally, the Commission claimed that rivals were contractually prevented from concluding exclusive pre-installation deals because Google’s own apps would also be pre-installed on devices.
However, while it is true that Google’s apps would remain present on a device, rivals could still pay for their applications to be set as the default. Even Yandex – a plaintiff – recognized that this would be a valuable solution. In its own words (taken from the Commission’s decision):
Pre-installation alongside Google would be of some benefit to an alternative general search provider such as Yandex […] given the importance of default status and pre-installation on home screen, a level playing field will not be established unless there is a meaningful competition for default status instead of Google.
In short, the Commission failed to convincingly establish that Google’s contractual terms prevented as-efficient rivals from effectively distributing their applications on Android smartphones. The evidence it adduced was simply too thin to support anything close to that conclusion.
2. The threat of fragmentation
The Commission’s second theory of harm concerned the so-called “antifragmentation” agreements concluded between Google and OEMs. In a nutshell, Google only agreed to license the Google Search and Google Play apps to OEMs that sold “Android Compatible” devices (i.e. devices sold with a version of Android that did not stray too far from Google’s most recent version).
According to Google, this requirement was necessary to limit the number of Android forks present on the market (as well as the number of devices running older versions of standard Android). This, in turn, reduced development costs and prevented the Android platform from unraveling.
The Commission disagreed, arguing that Google’s anti-fragmentation provisions thwarted competition from potential Android forks (i.e. modified versions of the Android OS).
This conclusion raises at least two critical questions: The first is whether these agreements were necessary to ensure the survival and competitiveness of the Android platform, and the second is why “open” platforms should be precluded from partly replicating a feature that is essential to rival “closed” platforms, such as Apple’s iOS.
Let us start with the necessity, or not, of Google’s contractual terms. If fragmentation did indeed pose an existential threat to the Android ecosystem, and anti-fragmentation agreements averted this threat, then it is hard to make a case that they thwarted competition. The Android platform would simply not have been as viable without them.
The Commission dismissed this possibility, relying largely on statements made by Google’s rivals (many of whom likely stood to benefit from the suppression of these agreements). For instance, the Commission cited comments that it received from Yandex – one of the plaintiffs in the case:
(1166) The fact that fragmentation can bring significant benefits is also confirmed by third-party respondents to requests for information:
(2) Yandex, which stated: “Whilst the development of Android forks certainly has an impact on the fragmentation of the Android ecosystem in terms of additional development being required to adapt applications for various versions of the OS, the benefits of fragmentation outweigh the downsides…”
Ironically, the Commission relied on Yandex’s statements while simultaneously dismissing arguments made by Android app developers on the grounds that they were conflicted. In its own words:
Google attached to its Response to the Statement of Objections 36 letters from OEMs and app developers supporting Google’s views about the dangers of fragmentation […] It appears likely that the authors of the 36 letters were influenced by Google when drafting or signing those letters.
More fundamentally, the Commission’s claim that fragmentation was not a significant threat is at odds with an almost unanimous agreement among industry insiders.
For example, while it is not dispositive, a rapid search for the terms “Google Android fragmentation”, using the DuckDuckGo search engine, leads to results that cut strongly against the Commission’s conclusions. Of the first ten results, only one could remotely be construed as claiming that fragmentation was not an issue. The others paint a very different picture (below are some of the most salient excerpts):
“There’s a fairly universal perception that Android fragmentation is a barrier to a consistent user experience, a security risk, and a challenge for app developers.” (here)
“Android fragmentation, a problem with the operating system from its inception, has only become more acute an issue over time, as more users clamor for the latest and greatest software to arrive on their phones.” (here)
“Android Fragmentation a Huge Problem: Study.” (here)
“Google’s Android fragmentation fix still isn’t working at all.” (here)
“Does Google care about Android fragmentation? Not now—but it should.” (here).
“This is very frustrating to users and a major headache for Google… and a challenge for corporate IT,” Gold said, explaining that there are a large number of older, not fully compatible devices running various versions of Android. (here)
Perhaps more importantly, one might question why Google should be treated differently than rivals that operate closed platforms, such as Apple, Microsoft and Blackberry (before the last two mostly exited the Mobile OS market). By definition, these platforms limit all potential forks (because they are based on proprietary software).
The Commission argued that Apple, Microsoft and Blackberry had opted to run “closed” platforms, which gave them the right to prevent rivals from copying their software.
While this answer has some superficial appeal, it is incomplete. Android may be an open source project, but this is not true of Google’s proprietary apps. Why should it be forced to offer them to rivals who would use them to undermine its platform? The Commission did not meaningfully consider this question.
And yet, industry insiders routinely compare the fragmentation of Apple’s iOS and Google’s Android OS in order to gauge the state of competition between both firms. For instance, one commentator noted:
[T]he gap between iOS and Android users running the latest major versions of their operating systems has never looked worse for Google.
Likewise, an article published in Forbes concluded that Google’s OEMs were slow at providing users with updates, and that this might drive users and developers away from the Android platform:
For many users the Android experience isn’t as up-to-date as Apple’s iOS. Users could buy the latest Android phone now and they may see one major OS update and nothing else. […] Apple users can be pretty sure that they’ll get at least two years of updates, although the company never states how long it intends to support devices.
However this problem, in general, makes it harder for developers and will almost certainly have some inherent security problems. Developers, for example, will need to keep pushing updates – particularly for security issues – to many different versions. This is likely a time-consuming and expensive process.
To recap, the Commission’s decision paints a world that is either black or white: either firms operate closed platforms, and they are then free to limit fragmentation as they see fit, or they create open platforms, in which case they are deemed to have accepted much higher levels of fragmentation.
This stands in stark contrast to industry coverage, which suggests that users and developers of both closed and open platforms care a great deal about fragmentation, and demand that measures be put in place to address it. If this is true, then the relative fragmentation of open and closed platforms has an important impact on their competitive performance, and the Commission was wrong to reject comparisons between Google and its closed ecosystem rivals.
3. Google’s revenue sharing agreements
The last part of the Commission’s case centered on revenue sharing agreements between Google and its OEMs/MNOs. Google paid these parties to exclusively place its search app on the homescreen of their devices. According to the Commission, these payments reduced OEMs and MNOs’ incentives to pre-install competing general search apps.
However, to reach this conclusion, the Commission had to make the critical (and highly dubious) assumption that rivals could not match Google’s payments.
To get to that point, it notably assumed that rival search engines would be unable to increase their share of mobile search results beyond their share of desktop search results. The underlying intuition appears to be that users who freely chose Google Search on desktop (Google Search & Chrome are not set as default on desktop PCs) could not be convinced to opt for a rival search engine on mobile.
But this ignores the possibility that rivals might offer an innovative app that swayed users away from their preferred desktop search engine.
More importantly, this reasoning cuts against the Commission’s own claim that pre-installation and default placement were critical. If most users dismiss their device’s default search app and search engine in favor of their preferred ones, then pre-installation and default placement are largely immaterial, and Google’s revenue sharing agreements could not possibly have thwarted competition (because they did not prevent users from independently installing their preferred search app). On the other hand, if users are easily swayed by default placement, then there is no reason to believe that rivals could not exceed their desktop market share on mobile phones.
The Commission was also wrong when it claimed that rival search engines were at a disadvantage because of the structure of Google’s revenue sharing payments. OEMs and MNOs allegedly lost all of their payments from Google if they exclusively placed a rival’s search app on the home screen of a single line of handsets.
The key question is the following: could Google automatically tilt the scales to its advantage by structuring the revenue sharing payments in this way? The answer appears to be no.
For instance, it has been argued that exclusivity may intensify competition for distribution. Conversely, other scholars have claimed that exclusivity may deter entry in network industries. Unfortunately, the Commission did not examine whether Google’s revenue sharing agreements fell within the latter category.
It thus provided insufficient evidence to support its conclusion that the revenue sharing agreements reduced OEMs’ (and MNOs’) incentives to pre-install competing general search apps, rather than merely increasing competition “for the market”.
To summarize, the Commission overestimated the effect that Google’s behavior might have on its rivals. It almost entirely ignored the justifications that Google put forward and relied heavily on statements made by its rivals. The result is a one-sided decision that puts undue strain on the Android business model, while providing few, if any, benefits in return.
Congress needs help understanding the fast moving world of technology. That help is not going to arise by reviving the Office of Technology Assessment (“OTA”), however. The OTA is an idea for another age, while the tweaks necessary to shore up the existing technology resources available to Congress are relatively modest.
Although a new OTA is unlikely to be harmful, it would entail the expenditure of additional resources, including the political capital necessary to create a new federal agency, along with all the revolving-door implications that entails.
The real problem with reviving the OTA is that it distracts Congress from considering that it needs to be more than merely well-informed. What we need is both smarter regulation and regulation better tailored to 21st century technology and the economy. A new OTA might help with the former problem, but may in fact only exacerbate the latter.
The OTA is a poor fit for the modern world
The OTA began existence in 1972, with a mission to provide science and technology advice to Congress. It was closed in 1995, following budget cuts. Lately, some well-meaning folks — including even some presidential hopefuls — have sought to revive the OTA.
To the extent that something like the OTA would be salutary today, it would be as a check on incorrect technologically and scientifically based assumptions contained in proposed legislation. For example, in the 90s the OTA provided useful technical information to Congress about how encryption technologies worked as it was considering legislation such as CALEA.
Yet there is good reason to believe that a new legislative-branch agency would not outperform the alternatives available today for performing these functions. A recent study from the National Academy of Public Administration (“NAPA”), undertaken at the request of Congress and the Congressional Research Service, summarized the OTA’s poor fit for today’s legislative process.
A new OTA “would have similar vulnerabilities that led to the dis-establishment of the [original] OTA.” While a new OTA could provide some information and services to Congress, “such services are not essential for legislators to actually craft legislation, because Congress has multiple sources for [Science and Technology] information/analysis already and can move legislation forward without a new agency.” Moreover, according to interviewed legislative branch personnel, the original OTA’s reports “were not critical parts of the legislative deliberation and decision-making processes during its existence.”
A new [OTA] conducting helpful but not essential work would struggle to integrate into the day-to-day legislative activities of Congress, and thus could result in questions of relevancy and leave it potentially vulnerable to political challenges
The NAPA report found that the Congressional Research Service (“CRS”) and the Government Accountability Office (“GAO”) already contained most of the resources that Congress needed. The report recommended enhancing those existing resources, and the creation of a science and technology coordinator position in Congress in order to facilitate the hiring of appropriate personnel for committees, among other duties.
The one gap identified by the NAPA report is that Congress currently has no “horizon scanning” capability to look at emerging trends in the long term. This was an original function of the OTA.
According to Peter D. Blair, in his book Congress’s Own Think Tank – Learning from the Legacy of the Office of Technology Assessment, an original intention of the OTA was to “provide an ‘early warning’ on the potential impacts of new technology.” (p. 43). But over time, the agency, facing the bureaucratic incentive to avoid political controversy, altered its behavior and became carefully “responsive to congressional needs” (p. 51) — which is a polite way of saying that the OTA’s staff came to see their purpose as providing justification for Congress to enact desired legislation and to avoid raising concerns that could be an impediment to that legislation. The bureaucratic pressures facing the agency forced a mission drift that would be highly likely to recur in a new OTA.
The NAPA report, however, has its own recommendation that does not involve the OTA: allow the newly created science and technology coordinator to create annual horizon-scanning reports.
A new OTA unnecessarily increases the surface area for regulatory capture
Apart from the likelihood that the OTA will be a mere redundancy, the OTA presents yet another vector for regulatory capture (or at least endless accusations of regulatory capture used to undermine its work). Andrew Yang inadvertently points to this fact on his campaign page that calls for a revival of the OTA:
This vital institution needs to be revived, with a budget large enough and rules flexible enough to draw top talent away from the very lucrative private sector.
Yang’s wishcasting aside, there is just no way that you are going to create an institution with a “budget large enough and rules flexible enough” to permanently siphon off top-tier talent from multi-billion dollar firms working on creating cutting edge technologies. What you will do is create an interesting, temporary post-graduate school or mid-career stop-over point where top-tier talent can cycle in and out of those top firms. These are highly intelligent, very motivated individuals who want to spend their careers making stuff, not writing research reports for Congress.
The same experts who are sufficiently high-level to work at the OTA will be similarly employable by large technology and scientific firms. The revolving door is all but inevitable.
The real problem to solve is a lack of modern governance
Lack of adequate information per se is not the real problem facing members of Congress today. The real problem is that, for the most part, legislators neither understand nor seem to care about how best to govern and establish regulatory frameworks for new technology. As a result, Congress passes laws that threaten to slow down the progress of technological development, thus harming consumers while protecting incumbents.
Assuming for the moment that there is some kind of horizon-scanning capability that a new OTA could provide, it necessarily fails, even on these terms. By the time Congress is sufficiently alarmed by a new or latent “problem” (or at least a politically relevant feature) of technology, the industry or product under examination has most likely already progressed far enough in its development that it’s far too late for Congress to do anything useful. Even though the NAPA report’s authors seem to believe that a “horizon scanning” capability will help, in a dynamic economy, truly predicting the technology that will impact society seems a bit like trying to predict the weather on a particular day a year hence.
Further, the limits of human cognition restrict the utility of “more information” to the legislative process. Will Rinehart discussed this quite ably, pointing to the psychological literature that indicates that, in many cases involving technical subjects, more information given to legislators only makes them overconfident. That is to say, they can cite more facts, but put fewer of them to good use when writing laws.
The truth is, no degree of expertise will ever again provide an adequate basis for producing prescriptive legislation meant to guide an industry or segment. The world is simply moving too fast.
It would be far more useful for Congress to explore legislation that encourages the firms involved in highly dynamic industries to develop and enforce voluntary standards that emerge as community standards. See, for example, the observation offered by Jane K. Winn in her paper on information governance and privacy law that
[i]n an era where the ability to compete effectively in global markets increasingly depends on the advantages of extracting actionable insights from petabytes of unstructured data, the bureaucratic individual control right model puts a straightjacket on product innovation and erects barriers to fostering a culture of compliance.
Winn is thinking about what a “governance” response to privacy and crises like the Cambridge Analytica scandal should be, and posits those possibilities against the top-down response of the EU with its General Data Protection Regulation (“GDPR”). She notes that preliminary research on GDPR suggests that framing privacy legislation as bureaucratic control over firms using consumer data can have the effect of removing all of the risk-management features that the private sector is good at developing.
Instead of pursuing legislative agendas that imagine the state as the all-seeing eye at the top of a command-and-control legislative pyramid, lawmakers should seek to enable those with relevant functional knowledge to employ that knowledge for good governance, broadly understood:
Reframing the information privacy law reform debate as the process of constructing new information governance institutions builds on decades of American experience with sector-specific, risk based information privacy laws and more than a century of American experience with voluntary, consensus standard-setting processes organized by the private sector. The turn to a broader notion of information governance reflects a shift away from command-and-control strategies and toward strategies for public-private collaboration working to protect individual, institutional and social interests in the creation and use of information.
The implications for a new OTA are clear. The model of “gather all relevant information on a technical subject to help construct a governing code” was, if ever, best applied to a world that moved at an industrial era pace. Today, governance structures need to be much more flexible, and the work of an OTA — even if Congress didn’t already have most of its advisory bases covered — has little relevance.
The engineers working at firms developing next generation technologies are the individuals with the most relevant, timely knowledge. A forward-looking view of regulation would try to develop a means for the information these engineers possess to surface and become an ongoing part of the governing standards.
*note – This post originally said that OTA began “operating” in 1972. I meant to say it began “existence” in 1972. I have corrected the error.
This is the second in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision (the first post can be found here). It draws on research from a soon-to-be published ICLE white paper.
This improper market definition might not be so problematic if the Commission had then proceeded to undertake a detailed (and balanced) assessment of the competitive conditions that existed in the markets where Google operates (including the competitive constraints imposed by Apple).
Unfortunately, this was not the case. The following paragraphs respond to some of the Commission’s most problematic arguments regarding the existence of barriers to entry, and the absence of competitive constraints on Google’s behavior.
The overarching theme is that the Commission failed to quantify its findings and repeatedly drew conclusions that did not follow from the facts cited. As a result, it was wrong to conclude that Google faced little competitive pressure from Apple and other rivals.
1. Significant investments and network effects ≠ barriers to entry
In its decision, the Commission notably argued that significant investments (millions of euros) are required to set up a mobile OS and app store. It also argued that the market for licensable mobile operating systems gave rise to network effects.
But contrary to the Commission’s claims, neither of these two factors is, in and of itself, sufficient to establish the existence of barriers to entry (even under EU competition law’s loose definition of the term, rather than Stigler’s more technical definition).
Take the argument that significant investments are required to enter the mobile OS market.
The main problem is that virtually every market requires significant investments on the part of firms that seek to enter. Not all of these costs can be seen as barriers to entry, or the concept would lose all practical relevance.
For example, purchasing a Boeing 737 Max airplane reportedly costs at least $74 million. Does this mean that incumbents in the airline industry are necessarily shielded from competition? Of course not.
Instead, the relevant question is whether an entrant with a superior business model could access the capital required to purchase an airplane and challenge the industry’s incumbents.
Returning to the market for mobile OSs, the Commission should thus have questioned whether as-efficient rivals could find the funds required to produce a mobile OS. If the answer was yes, then the investments highlighted by the Commission were largely immaterial. As it happens, several firms have indeed produced competing OSs, including CyanogenMod, LineageOS and Tizen.
The same is true of the Commission’s conclusion that network effects shielded Google from competitors. While network effects almost certainly play some role in the mobile OS and app store markets, it does not follow that they act as barriers to entry in competition law terms.
As Paul Belleflamme recently argued, it is a myth that network effects can never be overcome. And as I have written elsewhere, the most important question is whether users could effectively coordinate their behavior and switch towards a superior platform, if one arose (See also Dan Spulber’s excellent article on this point).
The Commission completely ignored this critical question in its discussion of network effects.
2. The failure of competitors is not proof of barriers to entry
Just as problematically, the Commission wrongly concluded that the failure of previous attempts to enter the market was proof of barriers to entry.
This is the epitome of the Black Swan fallacy (i.e. inferring that all swans are white because you have never seen a relatively rare, but not irrelevant, black swan).
The failure of rivals is equally consistent with any number of propositions:
There were indeed barriers to entry;
Google’s products were extremely good (in ways that rivals and the Commission failed to grasp);
Google responded to intense competitive pressure by continuously improving its product (and rivals thus chose to stay out of the market).
The Commission did not demonstrate that its own inference was the right one, nor did it even demonstrate any awareness that other explanations were at least equally plausible.
3. First mover advantage?
Much of the same can be said about the Commission’s observation that Google enjoyed a first mover advantage.
The elephant in the room is that Google was not the first mover in the smartphone market (and even less so in the mobile phone industry). The Commission attempted to sidestep this uncomfortable truth by arguing that Google was the first mover in the Android app store market. It then concluded that Google had an advantage because users were familiar with Android’s app store.
To call this reasoning “naive” would be too kind. Maybe consumers are familiar with Google’s products today, but they certainly weren’t when Google entered the market.
Why would something that did not hinder Google (i.e. users’ lack of familiarity with its products, as opposed to those of incumbents such as Nokia or Blackberry) have the opposite effect on its future rivals?
Moreover, even if rivals had to replicate Android’s user experience (and that of its app store) to prove successful, the Commission did not show that there was anything that prevented them from doing so — a particularly glaring omission given the open-source nature of the Android OS.
The result is that, at best, the Commission identified a correlation but not causality. Google may arguably have been the first, and users might have been more familiar with its offerings, but this still does not prove that Android flourished (and rivals failed) because of this.
4. It does not matter that users “do not take the OS into account” when they purchase a device
The Commission also concluded that alternatives to Android (notably Apple’s iOS and App Store) exercised insufficient competitive constraints on Google. Among other things, it argued that this was because users do not take the OS into account when they purchase a smartphone (so Google could allegedly degrade Android without fear of losing users to Apple).
In doing so, the Commission failed to grasp that buyers might base their purchases on a device’s OS without knowing it.
Some consumers will simply follow the advice of a friend, family member or buyer’s guide. Acutely aware of their own shortcomings, they thus rely on someone else who does take the phone’s OS into account.
But even when they are acting independently, unsavvy consumers may still be driven by technical considerations. They might rely on a brand’s reputation for providing cutting edge devices (which, per the Commission, is the most important driver of purchase decisions), or on a device’s “feel” when they try it in a showroom. In both cases, consumers’ choices could indirectly be influenced by a phone’s OS.
In more technical terms, a phone’s hardware and software are complementary goods. In these settings, it is extremely difficult to attribute overall improvements to just one of the two complements. For instance, a powerful OS and chipset are both equally necessary to deliver a responsive phone. The fact that consumers may misattribute a device’s performance to one of these two complements says nothing about their underlying contribution to a strong end-product (which, in turn, drives purchase decisions). Likewise, battery life is reportedly one of the most important features for users, yet few realize that a phone’s OS has a large impact on it.
Finally, if consumers were really indifferent to the phone’s operating system, then the Commission should have dropped at least part of its case against Google. The Commission’s claim that Google’s anti-fragmentation agreements harmed consumers (by reducing OS competition) has no purchase if Android is provided free of charge and consumers are indifferent to non-price parameters, such as the quality of a phone’s OS.
5. Google’s users were not “captured”
Finally, the Commission claimed that consumers are loyal to their smartphone brand and that competition for first time buyers was insufficient to constrain Google’s behavior against its “captured” installed base.
It notably found that 82% of Android users stick with Android when they change phones (compared to 78% for Apple), and that 75% of new smartphones are sold to existing users.
The Commission asserted, without further evidence, that these numbers proved there was little competition between Android and iOS.
But is this really so? In almost all markets, consumers likely exhibit at least some loyalty to their preferred brand. At what point does this become an obstacle to interbrand competition? The Commission offered no benchmark against which to assess its claims.
And although inter-industry comparisons of churn rates should be taken with a pinch of salt, it is worth noting that the Commission’s implied 18% churn rate for Android is nothing out of the ordinary (see, e.g., here, here, and here), including for industries that could not remotely be called anticompetitive.
To make matters worse, the Commission’s own claimed figures suggest that a large share of sales remained contestable (roughly 39%).
Imagine that, every year, 100 devices are sold in Europe (75 to existing users and 25 to new users, according to the Commission’s figures). Imagine further that the installed base of users is split 76–24 in favor of Android. Under the figures cited by the Commission, it follows that at least 39% of these sales are contestable.
According to the Commission’s figures, there would be 57 existing Android users (76% of 75) and 18 Apple users (24% of 75), of which roughly 10 (18%) and 4 (22%), respectively, switch brands in any given year. There would also be 25 new users who, even according to the Commission, do not display brand loyalty. The result is that out of 100 purchasers, 25 show no brand loyalty and 14 switch brands. And even this completely ignores the number of consumers who consider switching but choose not to after assessing the competitive options.
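The back-of-the-envelope arithmetic above can be sketched in a few lines of code (the figures are the Commission’s, as cited in the text; the 76–24 installed-base split is the illustrative assumption introduced above):

```python
# Contestability calculation using the Commission's cited figures.
total_sales = 100
new_users = 25                              # 25% of sales go to first-time buyers
existing_users = total_sales - new_users    # 75 sales to existing users

android_share, apple_share = 0.76, 0.24     # assumed installed-base split
android_churn, apple_churn = 0.18, 0.22     # 1 - loyalty rates (82% and 78%)

android_buyers = round(existing_users * android_share)  # ~57 repeat Android buyers
apple_buyers = round(existing_users * apple_share)      # ~18 repeat Apple buyers

# Existing users who switch brands, plus all new users, are contestable.
switchers = (round(android_buyers * android_churn)      # ~10 leave Android
             + round(apple_buyers * apple_churn))       # ~4 leave Apple
contestable = new_users + switchers

print(contestable)  # 39 of every 100 sales are contestable
```

This reproduces the roughly 39% figure: even taking the Commission’s loyalty numbers at face value, well over a third of annual sales remain up for grabs.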
In short, the preceding paragraphs argue that the Commission did not meet the requisite burden of proof to establish Google’s dominance. Of course, it is one thing to show that the Commission’s reasoning was unsound (it is) and another to establish that its overall conclusion was wrong.
At the very least, I hope these paragraphs will convey a sense that the Commission loaded the dice, so to speak. Throughout the first half of its lengthy decision, it interpreted every piece of evidence against Google, drew significant inferences from benign pieces of information, and often resorted to circular reasoning.
The following post in this blog series argues that these errors also permeate the Commission’s analysis of Google’s allegedly anticompetitive behavior.
John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”
This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.
Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.
Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.”
Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s.
Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails the cost-benefit analysis. Appelbaum focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, and its close with the financial crisis of the late-2000s. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia has harmed society as their influence over policy has grown.
In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.
First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.
The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.
In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.
Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars coming to the conclusion that there is a need for more antitrust scrutiny. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article is a challenge to the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though its retail business has been profitable for some time), on the theory that someday down the line it can raise prices after it has run all retail competition out.
Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions” — acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model of how that results in reduced competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,
“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”
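The authors’ caveat turns on a single comparison: does the innovation gain induced by the acquisition channel exceed the ex-post efficiency loss? A toy sketch of that pivot point (the function name, parameters, and numbers below are my own illustrative assumptions, not the authors’ model) looks like this:

```python
# A stylized rendering of the trade-off the study's authors describe:
# the prospect of exit-via-acquisition can spur ex-ante innovation (a
# welfare gain), while a consummated killer acquisition reduces ex-post
# competition (a welfare loss). Which effect dominates depends on the
# elasticity of the entrepreneur's innovation response.

def net_welfare_effect(innovation_elasticity: float,
                       ex_ante_gain: float,
                       ex_post_loss: float) -> float:
    """Net welfare change: elasticity-scaled innovation gain minus the
    efficiency loss from reduced competition. Units are arbitrary."""
    return innovation_elasticity * ex_ante_gain - ex_post_loss

# Strong innovation response: the acquisition channel is welfare-enhancing.
print(net_welfare_effect(innovation_elasticity=1.5, ex_ante_gain=10, ex_post_loss=8))  # 7.0

# Weak innovation response: the efficiency loss dominates.
print(net_welfare_effect(innovation_elasticity=0.2, ex_ante_gain=10, ex_post_loss=8))  # -6.0
```

The sign flip across the two calls is the whole point: whether killer acquisitions are on net harmful is an empirical question about that elasticity, which is why a case-by-case review fits the evidence better than a blanket presumption.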
This analysis suggests that a case-by-case review is necessary if antitrust plaintiffs can show evidence that harm to consumers is likely to occur due to a merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.
Third, Appelbaum’s few concrete examples of harm to consumers resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he sees the increased attention from competition authorities in Europe compared to the U.S. as the explanation for better outcomes. But neither is a clear example of harm to consumers, nor can either be used to show that Europe’s antitrust framework is superior to that of the United States.
In the case of airline mergers, Appelbaum argues the gains from deregulation of the industry have been largely given away due to poor antitrust enforcement and prices stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.
While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.
Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger…
One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.
In other words, neither part of Appelbaum’s proposition — that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States — appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.
Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy over telecommunications in Europe versus the United States. While prices are lower on average in Europe for broadband, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study titled U.S. vs. European Broadband Deployment: What Do the Data Say? found:
U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.
Population density also helps explain differences between Europe and the United States. The closer people are together, the easier it is to build out infrastructure like broadband Internet. The United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows a move in position from 23rd to 14th for the United States compared to 28 (mostly European) other countries once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further to 6th out of the 29 countries studied if data usage is included (Model 3), and to 7th if quality (i.e., websites available in the local language) is taken into consideration (Model 4).
Model 1: Unadjusted for demographics and content quality
Model 2: Adjusted for demographics but not content quality
Model 3: Adjusted for demographics and data usage
Model 4: Adjusted for demographics and content quality
Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:
The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing.
In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE.
Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition.
In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.
At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway. For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors.
So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And these models appear less grounded in empirical data than those of the economists he derides. There’s no escaping mental models to understand the world. It is just a question of whether we are willing to change our mind if a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”
For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question which remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.
This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.
In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.
“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”
Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.
They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”
An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”
New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us. Facebook, one critic charges, has “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”
Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”
And then there’s Amazon. It’s too efficient. One author worries in Fortune that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”
Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.
Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that, in his judgment, “enrich lives.”
As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.
The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting that person to forgo using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.
Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten-thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”
One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games broaden their attention spans and improve their abstract thinking.
Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.
Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.
We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like”, and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.
One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.
There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.
Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)
The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.
In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned.
In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”
There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.
In March of this year, Elizabeth Warren announced her proposal to break up Big Tech in a blog post on Medium. She tried to paint the tech giants as dominant players crushing their smaller competitors and strangling the open internet. This line in particular stood out: “More than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook.”
This statistic immediately struck me as outlandish, but I knew I would need to do some digging to fact check it. After seeing the claim repeated in a recent profile of the Open Markets Institute — “Google and Facebook control websites that receive 70 percent of all internet traffic” — I decided to track down the original source for this surprising finding.
Warren’s blog post links to a November 2017 Newsweek article — “Who Controls the Internet? Facebook and Google Dominance Could Cause the ‘Death of the Web’” — written by Anthony Cuthbertson. The piece is even more alarmist than Warren’s blog post: “Facebook and Google now have direct influence over nearly three quarters of all internet traffic, prompting warnings that the end of a free and open web is imminent.”
The Newsweek article, in turn, cites an October 2017 blog post by André Staltz, an open source freelancer, on his personal website titled “The Web began dying in 2014, here’s how”. His takeaway is equally dire: “It looks like nothing changed since 2014, but GOOG and FB now have direct influence over 70%+ of internet traffic.” Staltz claims the blog post took “months of research to write”, but the headline statistic is merely aggregated from a December 2015 blog post by Parse.ly, a web analytics and content optimization software company.
The Parse.ly article — “Facebook Continues to Beat Google in Sending Traffic to Top Publishers” — is about external referrals (i.e., outside links) to publisher sites (not total internet traffic) and says the “data set used for this study included around 400 publisher domains.” This is not even a random sample much less a comprehensive measure of total internet traffic. Here’s how they summarize their results: “Today, Facebook remains a top referring site to the publishers in Parse.ly’s network, claiming 39 percent of referral traffic versus Google’s share of 34 percent.”
So, using the sources provided by the respective authors, the claim from Elizabeth Warren that “more than 70% of all Internet traffic goes through sites owned or operated by Google or Facebook” can be more accurately rewritten as “more than 70 percent of external links to 400 publishers come from sites owned or operated by Google and Facebook.” When framed that way, it’s much less conclusive (and much less scary).
But what’s the real statistic for total internet traffic? This is a surprisingly difficult question to answer, because there is no single way to measure it: Are we talking about share of users, user-minutes, bits, total visits, unique visits, or referrals? According to Wikipedia, “Common measurements of traffic are total volume, in units of multiples of the byte, or as transmission rates in bytes per certain time units.”
One of the more comprehensive efforts to answer this question is undertaken annually by Sandvine. The networking equipment company uses its vast installed footprint of equipment across the internet to generate statistics on connections, upstream traffic, downstream traffic, and total internet traffic (summarized in the table below). This dataset covers both browser-based and app-based internet traffic, which is crucial for capturing the full picture of internet user behavior.
Looking at two categories of traffic analyzed by Sandvine — downstream traffic and overall traffic — gives the lie to the narrative pushed by Warren and others. As you can see in the chart below, HTTP media streaming — a category for smaller streaming services that Sandvine has not yet tracked individually — represented 12.8% of global downstream traffic, while Netflix accounted for 12.6%. According to Sandvine, “the aggregate volume of the long tail is actually greater than the largest of the short-tail providers.” So much for the open internet being smothered by the tech giants.
As for Google and Facebook? The report found that Google-operated sites receive 12.00 percent of total internet traffic while Facebook-controlled sites receive 7.79 percent. In other words, less than 20 percent of all Internet traffic goes through sites owned or operated by Google or Facebook. While this statistic may be less eye-popping than the one trumpeted by Warren and other antitrust activists, it does have the virtue of being true.
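The arithmetic behind that corrected figure is simple enough to check directly. The snippet below just restates the Sandvine shares quoted above and compares their sum to Warren’s claim:

```python
# Back-of-the-envelope check of the traffic-share claim, using the shares
# reported by Sandvine as quoted above: 12.00% of total internet traffic
# for Google-operated sites, 7.79% for Facebook-controlled sites.
google_share = 12.00
facebook_share = 7.79

combined = google_share + facebook_share
print(f"Combined Google + Facebook share: {combined:.2f}%")  # 19.79%

# Warren's claim was "more than 70% of all Internet traffic".
claimed = 70.0
print(f"The claimed share overstates the measured share by {claimed / combined:.1f}x")
```

Under 20 percent versus "more than 70%" — the claim overstates the measured figure by roughly a factor of three and a half, which is the gap between an external-referral statistic for 400 publishers and a genuine measure of total internet traffic.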