Archives For fake news

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here. This post is authored by Peter Klein (Professor of Entrepreneurship, Baylor University).]

Nicolas Petit’s insightful and provocative book ends with a chapter on “Big Tech’s Novel Harms,” asking whether antitrust is the appropriate remedy for popular (and academic) concerns about privacy, fake news, and hate speech. In each case, he asks whether the alleged harms are caused by a lack of competition among platforms – which could support a case for breaking them up – or by the nature of the underlying technologies and business models. He concludes that these problems are not alleviated (and may even be exacerbated) by applying competition policy and suggests that regulation, not antitrust, is the more appropriate tool for protecting privacy and truth.

What kind of regulation? Treating digital platforms like public utilities won’t work, Petit argues, because the product is multidimensional and competition takes place on multiple margins (the larger theme of the book): “there is a plausible chance that increased competition in digital markets will lead to a race to the bottom, in which price competition (e.g., on ad markets) will be the winner, and non-price competition (e.g., on privacy) will be the loser.” Utilities regulation also provides incentives for rent-seeking by less efficient rivals. Retail regulation, aimed at protecting small firms, may end up helping incumbents instead by raising rivals’ costs.

Petit concludes that consumer protection regulation (such as Europe’s GDPR) is a better tool for guarding privacy and truth, though it poses challenges as well. More generally, he highlights the vast gulf between the economic analysis of privacy and speech and the increasingly loud calls for breaking up the big tech platforms, which would do little to alleviate these problems.

As in the rest of the book, Petit’s treatment of these complex issues is thoughtful, careful, and systematic. I have more fundamental problems with conventional antitrust remedies and think that consumer protection is problematic when applied to data services (even more so than in other cases). Inspired by this chapter, let me offer some additional thoughts on privacy and the nature of data which speak to regulation of digital platforms and services.

First, privacy, like information, is not an economic good. Just as we don’t buy and sell information per se but information goods (books, movies, communications infrastructure, consultants, training programs, etc.), we likewise don’t produce and consume privacy but what we might call privacy goods: sunglasses, disguises, locks, window shades, land, fences and, in the digital realm, encryption software, cookie blockers, data scramblers, and so on.

Privacy goods and services can be analyzed just like other economic goods. Entrepreneurs offer bundled services that come with varying degrees of privacy protection: encrypted or regular emails, chats, voice and video calls; browsers that block cookies or don’t; social media sites, search engines, etc. that store information or not; and so on. Most consumers seem unwilling to sacrifice other functionality for increased privacy, as the small market shares held by DuckDuckGo, Telegram, Tor, and the like suggest. Moreover, while privacy per se is appealing, there are huge efficiency gains from matching on buyer and seller characteristics on sharing platforms, digital marketplaces, and dating sites. There are also substantial cost savings from electronic storage and sharing of private information such as medical records and credit histories. And there is little evidence of sellers exploiting such information to engage in price discrimination. (Acquisti, Taylor, and Wagman, 2016 provide a detailed discussion of many of these issues.)

Regulating markets for privacy goods via bans on third-party access to customer data, mandatory data portability, and stiff penalties for data breaches is tricky. Such policies could make digital services more valuable, but it is not obvious why the market cannot figure this out. If consumers are willing to pay for additional privacy, entrepreneurs will be eager to supply it. Of course, bans on third-party access and other forms of sharing would require a fundamental change in the ad-based revenue model that makes free or low-cost access possible, so platforms would have to devise other means of monetizing their services. (Again, many platforms already offer ad-free subscriptions, so it’s unclear why those who prefer ad-based, free usage should be prevented from doing so.)

What about the idea that I own “my” data and that, therefore, I should have full control over how it is used? Some of the utilities-based regulatory models treat platforms as neutral storage places or conduits for information belonging to users. Proposals for data portability suggest that users of technology platforms should be able to move their data from platform to platform, downloading all their personal information from one platform then uploading it to another, then enjoying the same functionality on the new platform as longtime users.

Of course, there are substantial technical obstacles to such proposals. Data would have to be stored in a universal format – not just the text or media users upload to platforms, but also records of all interactions (likes, shares, comments), the search and usage patterns of users, and any other data generated as a result of the user’s actions and interactions with other users, advertisers, and the platform itself. It is unlikely that any universal format could capture this information in a form that could be transferred from one platform to another without a substantial loss of functionality, particularly for platforms that use algorithms to determine how information is presented to users based on past use. (The extreme case is a platform like TikTok, which uses usage patterns, rather than follows, likes, and shares, to construct a “feed.”)

Moreover, as each platform sets its own rules for what information is allowed, the import functionality would have to screen the data for information allowed on the original platform but not on the new one (and the reverse would be impossible – a user switching from Twitter to Gab, for instance, would have no way to import content that would have been permitted on Gab but was never created in the first place because it would have violated Twitter’s rules).

There is a deeper, philosophical issue at stake, however. Portability and neutrality proposals take for granted that users own “their” data. Users create data, either by themselves or with their friends and contacts, and the platform stores and displays the data, just as a safe deposit box holds documents or jewelry and a display case shows off an art collection. I should be able to remove my items from the safe deposit box and take them home or to another bank, and a “neutral” display case operator should not prevent me from showing off my preferred art (perhaps subject to some general rules about obscenity or harmful personal information).

These analogies do not hold for user-generated information on internet platforms, however. “My data” is a record of all my interactions with platforms, with other users on those platforms, with contractual partners of those platforms, and so on. It is co-created by these interactions. I don’t own these records any more than I “own” the fact that someone saw me in the grocery store yesterday buying apples. Of course, if I have a contract with the grocer that says he will keep my purchase records private, and he shares them with someone else, then I can sue him for breach of contract. But this isn’t theft. He hasn’t “stolen” anything; there is nothing for him to steal. If a grocer — or an owner of a tech platform — wants to attract my business by monetizing the records of our interactions and giving me a cut, he should go for it. I still might prefer another store. In any case, I don’t have the legal right to demand this revenue stream.

Likewise, “privacy” refers to what other people know about me – it is knowledge in their heads, not mine. Information isn’t property. If I know something about you, that knowledge is in my head; it’s not something I took from you. Of course, if I obtained or used that information in violation of a prior agreement, then I’m guilty of breach, and if I use that information to threaten or harass you, I may be guilty of other crimes. But the popular idea that tech companies are stealing and profiting from something that’s “ours” isn’t right.

The concept of co-creation is important, because these digital records, like other co-created assets, can be more or less relationship specific. The late Oliver Williamson devoted his career to exploring the rich variety of contractual relationships devised by market participants to solve complex contracting problems, particularly in the face of asset specificity. Relationship-specific investments can be difficult for trading parties to manage, but they typically create more value. A legal regime in which only general-purpose, easily redeployable technologies were permitted would alleviate the holdup problem, but at the cost of a huge loss in efficiency. Likewise, a world in which all digital records must be fully portable reduces switching costs, but results in technologies for creating, storing, and sharing information that are less valuable. Why would platform operators invest in efficiency improvements if they cannot capture some of that value by means of proprietary formats, interfaces, sharing rules, and other arrangements?  

In short, we should not be quick to assume “market failure” in the market for privacy goods (or “true” news, whatever that is). Entrepreneurs operating in a competitive environment – not the static, partial-equilibrium notion of competition from intermediate micro texts but the rich, dynamic, complex, and multimarket kind of competition described in Petit’s book – can provide the levels of privacy and truthiness that consumers prefer.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

In the spring of 1669 a “flying coach” transported six passengers from Oxford to London in a single day. Within a few years similar carriage services connected many major towns to the capital.

“As usual,” Lord Macaulay wrote in his history of England, “many persons” were “disposed to clamour against the innovation, simply because it was an innovation.” They objected that the express rides would corrupt traditional horsemanship, throw saddlers and boatmen out of work, bankrupt the roadside taverns, and force travelers to sit with children and the disabled. “It was gravely recommended,” reported Macaulay, by various towns and companies, that “no public coach should be permitted to have more than four horses, to start oftener than once a week, or to go more than thirty miles a day.”

Macaulay used the episode to offer his contemporaries a warning. Although “we smile at these things,” he said, “our descendants, when they read the history of the opposition offered by cupidity and prejudice to the improvements of the nineteenth century, may smile in their turn.” Macaulay wanted the smart set to take a wider view of history.

They rarely do. It is not in their nature. As Schumpeter understood, the “intellectual group” cannot help attacking “the foundations of capitalist society.” “It lives on criticism and its whole position depends on criticism that stings.”

An aspiring intellectual would do well to avoid restraint or good cheer. Better to build on a foundation of panic and indignation. Want to sell books and appear on television? Announce the “death” of this or a “crisis” over that. Want to seem fashionable among other writers, artists, and academics? Denounce greed and rail against “the system.”

New technology is always a good target. When a lantern inventor obtained a patent to light London, observed Macaulay, “the cause of darkness was not left undefended.” The learned technophobes have been especially vexed lately. The largest tech companies, they protest, are manipulating us.

Facebook, The New Republic declares, “remade the internet in its hideous image.” The New Yorker wonders whether the platform is going to “break democracy.”

Apple is no better. “Have smartphones destroyed a generation?” asks The Atlantic in a cover-story headline. The article’s author, Jean Twenge, says smartphones have made the young less independent, more reclusive, and more depressed. She claims that today’s teens are “on the brink of the worst mental-health”—wait for it—“crisis in decades.” “Much of this deterioration,” she contends, “can be traced to their phones.”

And then there’s Amazon. It’s too efficient. Alex Salkever worries in Fortune that “too many clicks, too much time spent, and too much money spent on Amazon” is “bad for our collective financial, psychological, and physical health.”

Here’s a rule of thumb for the refined cultural critic to ponder. When the talking points you use to convey your depth and perspicacity match those of a sermonizing Republican senator, start worrying that your pseudo-profound TED-Talk-y concerns for social justice are actually just fusty get-off-my-lawn fears of novelty and change.

Enter Josh Hawley, freshman GOP senator from Missouri. Hawley claims that Facebook is a “digital drug” that “dulls” attention spans and “frays” relationships. He speculates about whether social media is causing teenage girls to attempt suicide. “What passes for innovation by Big Tech today,” he insists, is “ever more sophisticated exploitation of people.” He scolds the tech companies for failing to produce products that—in his judgment—“enrich lives” and “strengthen society.”

As for the stuff the industry does make, Hawley wants it changed. He has introduced a bill to ban infinite scrolling, music and video autoplay, and the use of “badges and other awards” (gamification) on social media. The bill also requires defaults that limit a user’s time on a platform to 30 minutes a day. A user could opt out of this restriction, but only for a month at a stretch.

The available evidence does not bear out the notion that highbrow magazines, let alone Josh Hawley, should redesign tech products and police how people use their time. You’d probably have to pay someone around $500 to stay off Facebook for a year. Getting her to forego using Amazon would cost even more. And Google is worth more still—perhaps thousands of dollars per user per year. These figures are of course quite rough, but that just proves the point: the consumer surplus created by the internet is inestimable.

Is technology making teenagers sad? Probably not. A recent study tracked the social-media use, along with the wellbeing, of around ten thousand British children for almost a decade. “In more than half of the thousands of statistical models we tested,” the study’s authors write, “we found nothing more than random statistical noise.” Although there were some small links between teenage girls’ mood and their social-media use, the connections were “miniscule” and too “trivial” to “inform personal parenting decisions.” “It’s probably best,” the researchers conclude, “to retire the idea that the amount of time teens spend on social media is a meaningful metric influencing their wellbeing.”

One could head the other way, in fact, and argue that technology is making children smarter. Surfing the web and playing video games might broaden their attention spans and improve their abstract thinking.

Is Facebook a threat to democracy? Not yet. The memes that Russian trolls distributed during the 2016 election were clumsy, garish, illiterate piffle. Most of it was the kind of thing that only an Alex Jones fan or a QAnon conspiracist would take seriously. And sure enough, one study finds that only a tiny fraction of voters, most of them older conservatives, read and spread the material. It appears, in other words, that the Russian fake news and propaganda just bounced around among a few wingnuts whose support for Donald Trump was never in doubt.

Over time, it is fair to say, the known costs and benefits of the latest technological innovations could change. New data and further study might reveal that the handwringers are on to something. But there’s good news: if you have fears, doubts, or objections, nothing stops you from acting on them. If you believe that Facebook’s behavior is intolerable, or that its impact on society is malign, stop using it. If you think Amazon is undermining small businesses, shop more at local stores. If you fret about your kid’s screen time, don’t give her a smartphone. Indeed, if you suspect that everything has gone pear-shaped since the Industrial Revolution started, throw out your refrigerator and stop going to the dentist.

We now hit the crux of the intellectuals’ (and Josh Hawley’s) complaint. It’s not a gripe about Big Tech so much as a gripe about you. You, the average person, are too dim, weak, and base. You lack the wits to use an iPhone on your own terms. You lack the self-control to post, “like”, and share in moderation (or the discipline to make your children follow suit). You lack the virtue to abstain from the pleasures of Prime-membership consumerism.

One AI researcher digs to the root. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or ‘I’m not on social media,’” she tells Vox. No one wields the “privilege” epithet quite like the modern privileged do. It is one of the remarkable features of our time. Pundits and professors use the word to announce, albeit unintentionally, that only they and their peers have any agency. Those other people, meanwhile, need protection from too much information, too much choice, too much freedom.

There’s nothing crazy about wanting the new aristocrats of the mind to shepherd everyone else. Noblesse oblige is a venerable concept. The lords care for the peasants, the king cares for the lords, God cares for the king. But that is not our arrangement. Our forebears embraced the Enlightenment. They began with the assumption that citizens are autonomous. They got suspicious whenever the holders of political power started trying to tell those citizens what they can and cannot do.

Algorithms might one day expose, and play on, our innate lack of free will so much that serious legal and societal adjustments are needed. That, however, is a remote and hypothetical issue, one likely to fall on a generation, yet unborn, who will smile in their turn at our qualms. (Before you place much weight on more dramatic predictions, consider that the great Herbert Simon asserted, in 1965, that we’d have general AI by 1985.)

The question today is more mundane: do voters crave moral direction from their betters? Are they clamoring to be viewed as lowly creatures who can hardly be relied on to tie their shoes? If so, they’re perfectly capable of debasing themselves accordingly through their choice of political representatives. Judging from Congress’s flat response to Hawley’s bill, the electorate is not quite there yet.

In the meantime, the great and the good might reevaluate their campaign to infantilize their less fortunate brothers and sisters. Lecturing people about how helpless they are is not deep. It’s not cool. It’s condescending and demeaning. It’s a form of trolling. Above all, it’s old-fashioned and priggish.

In 1816 The Times of London warned “every parent against exposing his daughter to so fatal a contagion” as . . . the waltz. “The novelty is one deserving of severe reprobation,” Britain’s paper of record intoned, “and we trust it will never again be tolerated in any moral English society.”

There was a time, Lord Macaulay felt sure, when some brahmin or other looked down his nose at the plough and the alphabet.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But, the facts may not fit the theory.

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurant strategy was a failure, it seems odd that the company would continue acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged buyout era added financial pressure. Many restaurant groups were filing for bankruptcy and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

More than a century of bad news

Bill Gates recently tweeted the image below, commenting that he is “always amazed by the disconnect between what we see in the news and the reality of the world around us.”

[Chart tweeted by Bill Gates: https://pbs.twimg.com/media/D8zWfENUYAAvK5I.png]

Of course, this chart and Gates’s observation are nothing new – there has long been an accuracy gap between what the news covers (and therefore what Americans believe is important) and what is actually important. As discussed in one academic article on the subject:

The line between journalism and entertainment is dissolving even within traditional news formats. [One] NBC executive decreed that every news story should “display the attributes of fiction, of drama. It should have structure and conflict, problem and denouement, rising action and falling action, a beginning, a middle and an end.” … This has happened both in broadcast and print journalism. … Roger Ailes … explains this phenomenon with an Orchestra Pit Theory: “If you have two guys on a stage and one guy says, ‘I have a solution to the Middle East problem,’ and the other guy falls in the orchestra pit, who do you think is going to be on the evening news?”

Matters of policy get increasingly short shrift. In 1968, the network newscasts generally showed presidential candidates speaking, and on the average a candidate was shown speaking uninterrupted for forty-two seconds. Over the next twenty years, these sound bites had shrunk to an average of less than ten seconds. This phenomenon is by no means unique to broadcast journalism; there has been a parallel decline in substance in print journalism as well. …

The fusing of news and entertainment is not accidental. “I make no bones about it—we have to be entertaining because we compete with entertainment options as well as other news stories,” says the general manager of a Florida TV station that is famous, or infamous, for boosting the ratings of local newscasts through a relentless focus on stories involving crime and calamity, all of which are presented in a hyperdramatic tone (the so-called “If It Bleeds, It Leads” format). There was a time when news programs were content to compete with other news programs, and networks did not expect news divisions to be profit centers, but those days are over.

That excerpt feels like it could have been written today. It was not: it was published in 1996. The “if it bleeds, it leads” trope is often attributed to a 1989 New York magazine article – and once introduced into the popular vernacular, it quickly grew in popularity.

Of course, the idea that the media sensationalizes its reporting is not a novel observation. “If it bleeds, it leads” is just the late-20th century term for what had been “sex sells” – and the idea of yellow journalism before then. And, of course, “if it bleeds” is the precursor to our more modern equivalent of “clickbait.”

The debate about how to save the press from Google and Facebook … is the wrong debate to have

We are in the midst of a debate about how to save the press in the digital age. The House Judiciary Committee recently held a hearing on the relationship between online platforms and the press; and the Australian Competition & Consumer Commission recently released a preliminary report on the same topic.

In general, these discussions focus on concerns that advertising dollars have shifted from analog-era media in the 20th century to digital platforms in the 21st century – leaving the traditional media underfunded and unable to do its job. More specifically, competition authorities are being urged (by the press) to look at this through the lens of antitrust, arguing that Google and Facebook are the dominant two digital advertising platforms and have used their market power to harm the traditional media.

I have previously explained that this is bunk, as has John Yun in his critique of current proposals. I won’t rehash those arguments here, beyond noting that traditional media’s revenues have been falling since the advent of the Internet – not since the advent of Google or Facebook. The problem the traditional media face is not that monopoly platforms are engaging in conduct that harms them; it is that the Internet is better as both an advertising platform and an information-distribution platform, so both advertisers and information consumers have migrated to digital platforms (and away from traditional news media).

This is not to say that digital platforms are capable of, or well-suited to, the production and distribution of the high-quality news and information content that we have historically relied on the traditional media to produce. Yet, contemporary discussions about whether traditional news media can survive in an era where ad revenue accrues primarily to large digital platforms have been surprisingly quiet on the question of the quality of content produced by the traditional media.

Actually, that’s not quite true. First, as indicated by the chart tweeted by Gates, digital platforms may be providing consumers with information that is more relevant to them.

Second, and more important, media advocates argue that without the ad revenue that has been diverted (by advertisers, not by digital platforms) to firms like Google and Facebook they lack the resources to produce high quality content. But that assumes that they would produce high quality content if they had access to those resources. As Gates’s chart – and the last century of news production – demonstrates, that is an ill-supported claim. History suggests that, left to its own devices and not constrained for resources by competition from digital platforms, the traditional media produces significant amounts of clickbait.

It’s all about the Benjamins

Among critics of the digital platforms, there is a line of argument that the advertising-based business model is the original sin of the digital economy. The ad-based business model corrupts digital platforms and turns them against their users – the user, that is, becomes the product in the surveillance capitalism state. We would all be much better off, the argument goes, if the platforms operated under subscription- or micropayment-based business models.

It is noteworthy that press advocates eschew this line of argument. Their beef with the platforms is that they have “stolen” the ad revenue that rightfully belongs to the traditional media. The ad revenue, of course, that is the driver behind clickbait, “if it bleeds it leads,” “sex sells,” and yellow journalism. The original sin of advertising-based business models is not original to digital platforms – theirs is just an evolution of the model perfected by the traditional media.

I am a believer in the importance of the press – and, for that matter, in the efficacy of ad-based business models. But more than a hundred years of experience makes clear that mixing the two into the hybrid bastard that is infotainment should prompt concern and discussion about the business model of the traditional press (and, indeed, for most of the past 30 years or so it has done so).

When it comes to “saving the press,” the discussion ought not be about how to restore traditional media to its pre-Facebook glory days of the early aughts, or even its golden age before the modern Internet in the late 1980s. By that point, the media was already well along the slippery slope to where it is today. We desperately need a strong, competitive market for news and information. We should use the crisis that market is currently in to discuss solutions for the future, not how to preserve the past.

Following is the (slightly expanded and edited) text of my remarks from the panel, Antitrust and the Tech Industry: What Is at Stake?, hosted last Thursday by CCIA. Bruce Hoffman (keynote), Bill Kovacic, Nicolas Petit, and Christine Caffarra also spoke. If we’re lucky Bruce will post his remarks on the FTC website; they were very good.

(NB: Some of these comments were adapted (or lifted outright) from a forthcoming Cato Policy Report cover story co-authored with Gus Hurwitz, so Gus shares some of the credit/blame.)

 

The urge to treat antitrust as a legal Swiss Army knife capable of correcting all manner of social and economic ills is apparently difficult for some to resist. Conflating size with market power, and market power with political power, many recent calls for regulation of industry — and the tech industry in particular — are framed in antitrust terms. Take Senator Elizabeth Warren, for example:

[T]oday, in America, competition is dying. Consolidation and concentration are on the rise in sector after sector. Concentration threatens our markets, threatens our economy, and threatens our democracy.

And she is not alone. A growing chorus of advocates is now calling for invasive, “public-utility-style” regulation or even the dissolution of some of the world’s most innovative companies, essentially because they are “too big.”

According to critics, these firms impose all manner of alleged harms — from fake news, to the demise of local retail, to low wages, to the veritable destruction of democracy — because of their size. What is needed, they say, is industrial policy that shackles large companies or effectively mandates smaller firms in order to keep their economic and political power in check.

But consider the relationship between firm size and political power and democracy.

Say you’re successful in reducing the size of today’s largest tech firms and in deterring the creation of new, very-large firms: What effect might we expect this to have on their political power and influence?

For the critics, the effect is obvious: A re-balancing of wealth and thus the reduction of political influence away from Silicon Valley oligarchs and toward the middle class — the “rudder that steers American democracy on an even keel.”

But consider a few (and this is by no means all) countervailing points:

To begin, at the margin, if you limit firms’ ability to grow as a means of competing with rivals, you make competition through political influence correspondingly more important. Erecting barriers to entry and raising rivals’ costs through regulation are time-honored American political traditions, and rent-seeking by smaller firms could become more prevalent and, paradoxically, ultimately lead to increased concentration.

Next, by imbuing antitrust with an ill-defined set of vague political objectives, you also make antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing the ability and the incentive to do so.

And finally, if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? All of a sudden the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might even find, again, that we end up with even more concentration because the exceptions could subsume the rules.

All of which of course highlights the fundamental, underlying problem: If you make antitrust more political, you’ll get less democratic, more politically determined, results — precisely the opposite of what proponents claim to want.

Then there’s democracy, and calls to break up tech in order to save it. Calls to do so are often made with reference to the original intent of the Sherman Act and Louis Brandeis and his “curse of bigness.” But intentional or not, these are rallying cries for the assertion, not the restraint, of political power.

The Sherman Act’s origin was ambivalent: although it was intended to proscribe business practices that harmed consumers, it was also intended to allow politically-preferred firms to maintain high prices in the face of competition from politically-disfavored businesses.

The years leading up to the adoption of the Sherman Act in 1890 were characterized by dramatic growth in the efficiency-enhancing, high-tech industries of the day. For many, the purpose of the Sherman Act was to stem this growth: to prevent low prices — and, yes, large firms — from “driving out of business the small dealers and worthy men whose lives have been spent therein,” in the words of Trans-Missouri Freight, one of the early Supreme Court decisions applying the Act.

Left to the courts, however, the Sherman Act didn’t quite do the trick. By 1911 (in Standard Oil and American Tobacco) — and reflecting consumers’ preferences for low prices over smaller firms — only “unreasonable” conduct was actionable under the Act. As one of the prime intellectual engineers behind the Clayton Antitrust Act and the Federal Trade Commission in 1914, Brandeis played a significant role in the (partial) legislative and administrative overriding of the judiciary’s excessive support for economic efficiency.

Brandeis was motivated by the belief that firms could become large only by illegitimate means and by deceiving consumers. But Brandeis was no advocate for consumer sovereignty. In fact, consumers, in Brandeis’ view, needed to be saved from themselves because they were, at root, “servile, self-indulgent, indolent, ignorant.”

There’s a lot that today we (many of us, at least) would find anti-democratic in the underpinnings of progressivism in US history: anti-consumerism; racism; elitism; a belief in centrally planned, technocratic oversight of the economy; promotion of social engineering, including through eugenics; etc. The aim of limiting economic power was manifestly about stemming the threat it posed to powerful people’s conception of what political power could do: to mold and shape the country in their image — what economist Thomas Sowell calls “the vision of the anointed.”

That may sound great when it’s your vision being implemented, but today’s populist antitrust resurgence comes while Trump is in the White House. It’s baffling to me that so many would expand and then hand over the means to design the economy and society in their image to antitrust enforcers in the executive branch and presidentially appointed technocrats.

Throughout US history, it is the courts that have often been the bulwark against excessive politicization of the economy, and it was the courts that shepherded the evolution of antitrust away from its politicized roots toward rigorous, economically grounded policy. And it was progressives like Brandeis who worked to take antitrust away from the courts. Now, with efforts like Senator Klobuchar’s merger bill, the “New Brandeisians” want to rein in the courts again — to get them out of the way of efforts to implement their “big is bad” vision.

But the evidence that big is actually bad, least of all on those non-economic dimensions, is thin and contested.

While Zuckerberg is grilled in Congress over perceived, endemic privacy problems, politician after politician and news article after news article rushes to assert that the real problem is Facebook’s size. Yet there is no convincing analysis (maybe no analysis of any sort) that connects its size with the problem, or that evaluates whether the asserted problem would actually be cured by breaking up Facebook.

Barry Lynn claims that the origins of antitrust are in the checks and balances of the Constitution, extended to economic power. But if that’s right, then the consumer welfare standard and the courts are the only things actually restraining the disruption of that order. If there are gains to be had from tweaking the minutiae of the process of antitrust enforcement and adjudication, then by all means we should have a careful, lengthy discussion about those tweaks.

But throwing the whole apparatus under the bus for the sake of an unsubstantiated, neo-Brandeisian conception of what the economy should look like is a terrible idea.

If you do research involving statistical analysis, you’ve heard of John Ioannidis. If you haven’t heard of him, you will. He’s gone after the fields of medicine, psychology, and economics. He may be coming for your field next.

Ioannidis is after bias in research. He is perhaps best known for a 2005 paper “Why Most Published Research Findings Are False.” A professor at Stanford, he has built a career in the field of meta-research and may be one of the most highly cited researchers alive.

In 2017, he published “The Power of Bias in Economics Research.” He recently talked to Russ Roberts on the EconTalk podcast about his research and what it means for economics.

He focuses on two factors that contribute to bias in economics research: publication bias and low power. These are complicated topics. This post aims to provide a simplified explanation of these issues and why bias and power matter.

What is bias?

We frequently hear the word bias. “Fake news” is biased news. For dinner, I am biased toward steak over chicken. That’s different from statistical bias.

In statistics, bias means that a researcher’s estimate of a variable or effect is different from the “true” value or effect. The “true” probability of getting heads from tossing a fair coin is 50 percent. Let’s say that no matter how many times I toss a particular coin, I find that I’m getting heads about 75 percent of the time. My instrument, the coin, may be biased. I may be the most honest coin flipper, but my experiment has biased results. In other words, biased results do not imply biased research or biased researchers.
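
To make the distinction concrete, here is a minimal sketch (all numbers hypothetical): an honest flipper using an unbiased estimation procedure still gets biased results if the instrument itself – a bent coin – is biased.

```python
# A bent coin lands heads ~75% of the time; the flipper and the estimator
# are both perfectly honest, yet the estimate is biased relative to the
# "true" fair-coin value of 50%.
import random

random.seed(42)

def flip_bent_coin(p_heads=0.75):
    """Simulate one flip of a physically bent coin (hypothetical)."""
    return random.random() < p_heads

n = 10_000
heads = sum(flip_bent_coin() for _ in range(n))
print(f"Estimated P(heads) after {n} honest flips: {heads / n:.3f}")  # ~0.750
```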

Publication bias

Publication bias occurs because peer-reviewed publications tend to favor publishing positive, statistically significant results and to reject insignificant results. Informally, this is known as the “file drawer” problem. Nonsignificant results remain unsubmitted in the researcher’s file drawer or, if submitted, remain in limbo in an editor’s file drawer.

Studies are more likely to be published in peer-reviewed publications if they have statistically significant findings, build on previous published research, and can potentially garner citations for the journal with sensational findings. Studies that don’t have statistically significant findings or don’t build on previous research are less likely to be published.

The importance of “sensational” findings means that ho-hum findings—even if statistically significant—are less likely to be published. For example, research finding that a 10 percent increase in the minimum wage is associated with a one-tenth of 1 percent reduction in employment (i.e., an elasticity of –0.01) would be less likely to be published than a study finding a 3 percent reduction in employment (i.e., an elasticity of –0.3).

“Man bites dog” findings—those that are counterintuitive or contradict previously published research—may be less likely to be published. A study finding an upward sloping demand curve is likely to be rejected because economists “know” demand curves slope downward.

On the other hand, man bites dog findings may also be more likely to be published. Card and Krueger’s 1994 study finding that a minimum wage hike was associated with an increase in the employment of low-wage workers was published in the top-tier American Economic Review. Had the study been conducted by lesser-known economists, it’s much less likely it would have been accepted for publication. The results were sensational, judging from the attention the article got from the New York Times, the Wall Street Journal, and even the Clinton administration. Sometimes a man does bite a dog.

Low power

A study with low statistical power has a reduced chance of detecting a true effect.

Consider our criminal legal system. We seek to find criminals guilty, while ensuring the innocent go free. Using the language of statistical testing, the presumption of innocence is our null hypothesis. We set a high threshold for our test: Innocent until proven guilty, beyond a reasonable doubt. We hypothesize innocence and only after overcoming our reasonable doubt do we reject that hypothesis.

[Figure: Type I and Type II errors]

An innocent person found guilty is considered a serious error—a “miscarriage of justice.” The presumption of innocence (null hypothesis) combined with a high burden of proof (beyond a reasonable doubt) are designed to reduce these errors. In statistics, this is known as “Type I” error, or “false positive.” The probability of a Type I error is called alpha, which is set to some arbitrarily low number, like 10 percent, 5 percent, or 1 percent.

Failing to convict a known criminal is also a serious error, but it is generally agreed to be less serious than a wrongful conviction. Statistically speaking, this is a “Type II” error or “false negative,” and the probability of making a Type II error is called beta.

By now, it should be clear there’s a relationship between Type I and Type II errors. If we reduce the chance of a wrongful conviction, we are going to increase the chance of letting some criminals go free. It can be shown mathematically (though not here) that, for a given test and sample size, a reduction in the probability of Type I error is associated with an increase in the probability of Type II error.
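
Here is a minimal sketch of that tradeoff, assuming a simple one-sided z-test with known variance, a fixed sample size, and an assumed true effect (all parameters hypothetical): tightening alpha mechanically raises beta.

```python
# For a one-sided z-test with a fixed sample size and a fixed true effect,
# lowering alpha (fewer false positives) raises beta (more false negatives).
from scipy.stats import norm

effect = 0.5                       # assumed true effect, in standard deviations
n = 25                             # assumed sample size
shift = effect * n ** 0.5          # how far the truth sits from the null

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)        # rejection threshold under the null
    beta = norm.cdf(z_crit - shift)     # P(failing to detect the true effect)
    print(f"alpha={alpha:.2f}  beta={beta:.2f}  power={1 - beta:.2f}")

# alpha=0.10  beta=0.11  power=0.89
# alpha=0.05  beta=0.20  power=0.80
# alpha=0.01  beta=0.43  power=0.57
```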

Consider O.J. Simpson. Simpson was not found guilty in his criminal trial for murder, but was found liable for the deaths of Nicole Simpson and Ron Goldman in a civil trial. One reason for these different outcomes is the higher burden of proof for a criminal conviction (“beyond a reasonable doubt,” alpha = 1 percent) than for a finding of civil liability (“preponderance of evidence,” alpha = 50 percent). If O.J. truly is guilty of the murders, the criminal trial would have been less likely to find guilt than the civil trial would.

In econometrics, we construct the null hypothesis to be the opposite of what we hypothesize to be the relationship. For example, if we hypothesize that an increase in the minimum wage decreases employment, the null hypothesis would be: “A change in the minimum wage has no impact on employment.” If the research involves regression analysis, the null hypothesis would be: “The estimated coefficient on the elasticity of employment with respect to the minimum wage would be zero.” If we set the probability of Type I error to 5 percent, then regression results with a p-value of less than 0.05 would be sufficient to reject the null hypothesis of no relationship. If we increase the probability of Type I error, we increase the likelihood of finding a relationship, but we also increase the chance of finding a relationship when none exists.
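
As a minimal sketch of how such a test works in practice – using simulated data only, with an assumed true elasticity of –0.1 and hypothetical noise – one could regress (log) employment on the (log) minimum wage and compare the coefficient’s p-value to alpha:

```python
# Simulated example: test the null hypothesis that the employment elasticity
# with respect to the minimum wage is zero.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
true_elasticity = -0.1                          # assumed for the simulation
log_min_wage = rng.normal(2.0, 0.3, 200)        # hypothetical log minimum wages
log_employment = 5.0 + true_elasticity * log_min_wage + rng.normal(0, 0.5, 200)

X = sm.add_constant(log_min_wage)
results = sm.OLS(log_employment, X).fit()

print(f"estimated elasticity = {results.params[1]:.3f}, "
      f"p-value = {results.pvalues[1]:.3f}")
# Reject the null of "no relationship" only if the p-value is below the chosen
# alpha (e.g., 0.05). With these assumed parameters the test is underpowered,
# so most simulated samples will fail to detect the true effect of -0.1.
```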

Now, we’re getting to power.

Power is the chance of detecting a true effect. In the legal system, it would be the probability that a truly guilty person is found guilty.

By definition, a low power study has a small chance of discovering a relationship that truly exists. Low power studies produce more false negatives than high power studies. If a set of studies has a power of 20 percent, then if we know that there are 100 actual effects, the studies will find only 20 of them. In other words, out of 100 truly guilty suspects, a legal system with a power of 20 percent will find only 20 of them guilty.

Suppose we expect 25 percent of those accused of a crime are truly guilty of the crime. Thus the odds of guilt are R = 0.25 / 0.75 = 0.33. Assume we set alpha to 0.05, and conclude the accused is guilty if our test statistic provides p < 0.05. Using Ioannidis’ formula for positive predictive value, we find:

  • If the power of the test is 20 percent, the probability that a “guilty” verdict reflects true guilt is 57 percent.
  • If the power of the test is 80 percent, the probability that a “guilty” verdict reflects true guilt is 84 percent.

In other words, a smaller share of the “guilty” verdicts produced by a low power test reflect true guilt: a conviction under a low power system is less likely to be correct than one under a high power system.
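
For readers who want to check those numbers, Ioannidis’ positive predictive value formula is PPV = (power × R) / (power × R + alpha), where R is the pre-test odds that the accused is truly guilty. A minimal sketch reproducing the two figures above:

```python
# Positive predictive value: the probability that a "guilty" verdict
# (a statistically significant finding) reflects true guilt (a true effect).
def ppv(power, alpha, R):
    return (power * R) / (power * R + alpha)

R = 0.25 / 0.75        # odds of guilt when 25% of the accused are truly guilty
alpha = 0.05

print(f"power = 20%: PPV = {ppv(0.20, alpha, R):.0%}")   # ~57%
print(f"power = 80%: PPV = {ppv(0.80, alpha, R):.0%}")   # ~84%
```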

In our minimum wage example, a statistically significant result from a low power study is more likely to be a false positive – a finding of a relationship between a change in the minimum wage and employment when no relationship truly exists. By extension, even if a relationship truly exists, a significant finding from a low power study will tend to overstate the size of the impact relative to a high power study. The figure below demonstrates this phenomenon.

[Figure: Funnel graph of estimated minimum wage employment elasticities plotted against the precision of each estimate]

Across the 1,424 studies surveyed, the average elasticity with respect to the minimum wage is –0.190 (i.e., a 10 percent increase in the minimum wage would be associated with a 1.9 percent decrease in employment). When adjusted for the studies’ precision, the weighted average elasticity is –0.054. By this simple analysis, the unadjusted average is 3.5 times bigger than the adjusted average. Ioannidis and his coauthors estimate among the 60 studies with “adequate” power, the weighted average elasticity is –0.011.
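
The mechanics of precision weighting are simple, even though the actual meta-analysis is more sophisticated. A minimal sketch with made-up numbers: weighting each estimate by its precision (here taken as 1 divided by its standard error) pulls the average toward the most precisely estimated studies, which in this literature tend to cluster near zero.

```python
# Hypothetical elasticity estimates and standard errors (not real data):
# imprecise studies report large negative elasticities, precise ones report
# elasticities near zero -- the pattern of the funnel graph above.
elasticities = [-0.45, -0.20, -0.10, -0.03, -0.01]
std_errors   = [ 0.30,  0.15,  0.08,  0.02,  0.01]

weights = [1 / se for se in std_errors]                  # precision = 1/SE
simple_avg = sum(elasticities) / len(elasticities)
weighted_avg = sum(w * e for w, e in zip(weights, elasticities)) / sum(weights)

print(f"simple average elasticity:             {simple_avg:.3f}")   # about -0.16
print(f"precision-weighted average elasticity: {weighted_avg:.3f}")  # about -0.04
```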

(By the way, my own unpublished studies of minimum wage impacts at the state level had an estimated short-run elasticity of –0.03 and “precision” of 122 for Oregon and short-run elasticity of –0.048 and “precision” of 259 for Colorado. These results are in line with the more precise studies in the figure above.)

Is economics bogus?

It’s tempting to walk away from this discussion thinking all of econometrics is bogus. Ioannidis himself responds to this temptation:

Although the discipline has gotten a bad rap, economics can be quite reliable and trustworthy. Where evidence is deemed unreliable, we need more investment in the science of economics, not less.

For policymakers, the reliance on economic evidence is even more important, according to Ioannidis:

[P]oliticians rarely use economic science to make decisions and set new laws. Indeed, it is scary how little science informs political choices on a global scale. Those who decide the world’s economic fate typically have a weak scientific background or none at all.

Ioannidis and his colleagues identify several ways to address the reliability problems in economics and other fields—social psychology is one of the worst. However, these are longer-term solutions.

In the short term, researchers and policymakers should view sensational findings with skepticism, especially if those sensational findings support their own biases. That skepticism should begin with one simple question: “What’s the confidence interval?”