
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Will Rinehart (Senior Research Fellow, Center for Growth and Opportunity).]

Nellie Bowles, a longtime critic of tech, recently had a change of heart, which she relayed in the New York Times:

Before the coronavirus, there was something I used to worry about. It was called screen time. Perhaps you remember it.

I thought about it. I wrote about it. A lot. I would try different digital detoxes as if they were fad diets, each working for a week or two before I’d be back on that smooth glowing glass.

Now I have thrown off the shackles of screen-time guilt. My television is on. My computer is open. My phone is unlocked, glittering. I want to be covered in screens. If I had a virtual reality headset nearby, I would strap it on.

Bowles isn’t alone. The Washington Post recently documented how social distancing has caused people to rethink “one of the great villains of modern technology: screens.” Matthew Yglesias of Vox has been critical of tech in the past as well, but recently admitted that these tools are “making our lives much better.” Cal Newport might have called for Twitter to be shut down, but now thinks the service can be useful. These anecdotes speak to a larger trend. According to one national poll, some 88 percent of Americans now have a better appreciation for technology since the pandemic has forced them to rely upon it.

Before COVID-19, catchy headlines like “Heavy Social Media Use Linked With Mental Health Issues In Teens” and “Have Smartphones Destroyed a Generation?” were met with nods of approval. These concerns found backing in legislation like Senator Josh Hawley’s “Social Media Addiction Reduction Technology Act,” or SMART Act. The opening lines of the SMART Act make it clear the legislation would “prohibit social media companies from using practices that exploit human psychology or brain physiology to substantially impede freedom of choice, [and] to require social media companies to take measures to mitigate the risks of internet addiction and psychological exploitation.”

Most psychologists steer clear of the term addiction because it implies that a person engages in hazardous use, shows tolerance, and neglects social roles. Because social media, gaming, and cell phone use don’t meet this threshold, the profession tends to describe those who experience negative impacts as engaging in problematic use of the tech, a label that applies only to a small minority. According to one estimate, for example, only half a percent of gamers show patterns of problematic use.

Even though tech use doesn’t meet the criteria for addiction, the term addiction finds purchase in policy discussions and media outlets because it suggests a healthier norm. Computer games have prosocial benefits, yet it is common to hear that the activity is no match for going outside to play. The same kind of argument exists with social media and phone use; face-to-face communication is preferred to tech-enabled communication. 

But the coronavirus has inverted the normal conditions. Social distancing doesn’t allow us to connect in person or play outside with friends. Faced with no other alternative, people have embraced technology. Videoconferencing is up, as is social media use. This new norm has brought with it a needed rethink of critiques of tech. Even before this moment, however, the research on tech effects has had its problems.

To begin, even though screen time and social media use have been researched extensively, they have not been clearly shown to cause harm. Earlier this year, psychologists Candice Odgers and Michaeline Jensen conducted a massive literature review and summarized the research as “a mix of often conflicting small positive, negative and null associations.” The researchers also point out that studies finding a negative relationship between well-being and tech use tend to be correlational, not causal, and thus are “unlikely to be of clinical or practical significance” to parents or therapists.

Through no fault of their own, researchers tend to focus on a limited number of relationships when it comes to tech use. But professors Amy Orben and Andrew Przybylski were able to sidestep these problems by getting computers to test every theoretically defensible hypothesis. In a writeup appropriately titled “Beyond Cherry-Picking,” the duo explained why this method is important to policymakers:

Although statistical significance is often used as an indicator that findings are practically significant, the paper moves beyond this surrogate to put its findings in a real-world context.  In one dataset, for example, the negative effect of wearing glasses on adolescent well-being is significantly higher than that of social media use. Yet policymakers are currently not contemplating pumping billions into interventions that aim to decrease the use of glasses.

Their academic paper throws cold water on the screen time and tech use debate. Since social media explains only 0.4% of the variation in well-being, much greater welfare gains can be made by concentrating on other policy issues. For example, regularly eating breakfast, getting enough sleep, and avoiding marijuana use play much larger roles in the well-being of adolescents. Social media is only a tiny portion of what determines well-being, as the chart in their paper helps to illustrate.
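To make the “every defensible hypothesis” method concrete, here is a minimal sketch of a specification curve analysis in Python. The dataset and column names are hypothetical stand-ins, not Orben and Przybylski’s actual data; the point is the shape of the procedure: fit every defensible combination of predictor, outcome, and controls, then inspect the whole distribution of estimated effects.

```python
# Minimal sketch of a specification curve analysis.
# The CSV and column names below are hypothetical placeholders.
from itertools import combinations, product

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("adolescent_survey.csv")

predictors = ["social_media_hours", "tv_hours", "gaming_hours"]
outcomes = ["life_satisfaction", "self_esteem", "mood"]
controls = ["age", "sex", "household_income"]

results = []
for pred, outcome in product(predictors, outcomes):
    # Every subset of controls, including the empty set, is a "specification."
    for k in range(len(controls) + 1):
        for ctrl in combinations(controls, k):
            formula = f"{outcome} ~ {pred}" + "".join(f" + {c}" for c in ctrl)
            fit = smf.ols(formula, data=df).fit()
            results.append({"predictor": pred, "outcome": outcome,
                            "controls": ctrl, "effect": fit.params[pred]})

curve = pd.DataFrame(results).sort_values("effect")
# Rather than one cherry-picked estimate, report the whole curve:
print(curve["effect"].describe())
```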

Second, most social media research relies on self-reporting methods, which are systematically biased and often unreliable. Communication professor Michael Scharkow, for example, compared self-reports of Internet use with computer log files, which show everything a computer has done and when, and found that “survey data are only moderately correlated with log file data.” A quartet of psychology professors in the UK discovered that self-reported smartphone use and social media addiction scales face similar problems: they don’t correctly capture reality. Patrick Markey, Professor and Director of the IR Laboratory at Villanova University, summarized the work: “the fear of smartphones and social media was built on a castle made of sand.”
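The validation exercise itself is simple to picture. A minimal sketch, with hypothetical file and column names standing in for Scharkow’s actual data:

```python
# Compare self-reported use against logged use (hypothetical data files).
import pandas as pd

logs = pd.read_csv("device_logs.csv")    # columns: user_id, logged_minutes
survey = pd.read_csv("survey.csv")       # columns: user_id, reported_minutes

merged = logs.merge(survey, on="user_id")
r = merged["logged_minutes"].corr(merged["reported_minutes"])
# Scharkow's finding: r is only moderate, so self-reports are a noisy
# proxy for actual behavior -- and any study built on them inherits the noise.
print(f"self-report vs. log-file correlation: r = {r:.2f}")
```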

Expert bodies have been changing their tune as well. The American Academy of Pediatrics took a hardline stance for years, preaching digital abstinence. But the organization has since backpedaled and now says that screens are fine in moderation, suggesting that parents and children work together to create boundaries.

Once this pandemic is behind us, policymakers and experts should reconsider the screen time debate. We need to move away from loaded terms like addiction and embrace a more realistic model of the world. The truth is that everyone’s relationship with technology is complicated. Instead of paternalistic legislation, leaders should place the onus on parents and individuals to figure out what is right for them.

In mid-November, the 50 state attorneys general (AGs) investigating Google’s advertising practices expanded their antitrust probe to include the company’s search and Android businesses. Texas Attorney General Ken Paxton, the lead on the case, was supportive of the development, but made clear that other states would manage the investigations of search and Android separately. While attorneys might see the benefit in splitting up the search and advertising investigations, platforms like Google need to be understood as a coherent whole. If the state AGs’ case is truly concerned with the overall impact on the welfare of consumers, it will need to be firmly grounded in the unique economics of this platform.

Back in September, 50 attorneys general, including those of Washington, DC and Puerto Rico, announced an investigation into Google. In opening the case, Paxton said, “There is nothing wrong with a business becoming the biggest game in town if it does so through free market competition, but we have seen evidence that Google’s business practices may have undermined consumer choice, stifled innovation, violated users’ privacy, and put Google in control of the flow and dissemination of online information.” While the original demands for documents focused on Google’s “overarching control of online advertising markets and search traffic,” reports since then suggest that the primary investigation centers on online advertising.

Defining the market

Since market definition is the first and arguably the most important step in an antitrust case, Paxton has tipped his hand and shown that the investigation is converging on the online ad market. Yet he faltered when he wrote in The Wall Street Journal that, “Each year more than 90% of Google’s $117 billion in revenue comes from online advertising. For reference, the entire market for online advertising is around $130 billion annually.” As Patrick Hedger of the Competitive Enterprise Institute was quick to note, Paxton compared Google’s global revenue numbers to domestic advertising statistics. In reality, Google’s share of the online advertising market in the United States is 37 percent and is widely expected to fall.
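The arithmetic behind Hedger’s objection is worth spelling out, using only the figures quoted above:

```python
# Why Paxton's comparison is apples-to-oranges: global revenue vs. a U.S. market.
google_global_ad_revenue = 0.90 * 117e9  # "more than 90%" of $117B, worldwide
us_online_ad_market = 130e9              # ~$130B, United States only

implied_share = google_global_ad_revenue / us_online_ad_market
print(f"implied 'share' from the op-ed: {implied_share:.0%}")  # ~81%, meaningless

# Comparing like with like -- U.S. revenue against the U.S. market -- yields
# the roughly 37 percent share noted above.
```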

When Google faced scrutiny from the Federal Trade Commission in 2013, the leaked staff report explained that “the Commission and the Department of Justice have previously found online ‘search advertising’ to be a distinct product market.” This finding, which dates from 2007, simply wouldn’t stand today. Facebook’s ad platform launched in 2007 and has grown into a major competitor to Google. More recently, Amazon has jumped into the space, and independent platforms like Telaria, Rubicon Project, and The Trade Desk have all made inroads. In contrast to the late 2000s, advertisers now use about four different online ad platforms.

Moreover, the relationship between ad prices and industry concentration is complicated. In traditional economic analysis, fewer suppliers of a product generally translates into higher prices. In the online ad market, however, keyword targeting means advertisers bid only for the users they value, so fewer, better-informed bidders compete in any given auction. Because advertisers have access to superior information, research finds that more concentration tends to lead to lower search engine revenues.
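A toy model can illustrate one mechanism behind that finding. This is a stylized sketch of my own, not the cited research: in a second-price auction, platform revenue per query is the second-highest bid, so when keyword targeting thins the field of bidders, revenue can fall even as the market looks more concentrated.

```python
# Stylized second-price auction: fewer, targeted bidders -> lower second price.
import random

random.seed(0)

def second_price(bids):
    return sorted(bids)[-2] if len(bids) >= 2 else 0.0

n_queries, n_advertisers, match_rate = 10_000, 20, 0.2
broad_revenue = targeted_revenue = 0.0

for _ in range(n_queries):
    valuations = [random.random() for _ in range(n_advertisers)]
    # Broad bidding: every advertiser bids on every query.
    broad_revenue += second_price(valuations)
    # Targeted bidding: an advertiser bids only when the query matches its niche.
    targeted_bids = [v for v in valuations if random.random() < match_rate]
    targeted_revenue += second_price(targeted_bids)

print(f"broad:    {broad_revenue:>10,.0f}")
print(f"targeted: {targeted_revenue:>10,.0f}")  # noticeably lower
```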

The addition of new fronts in the state AGs’ investigation could spell disaster for consumers. While search and advertising are distinct markets, it is the act of tying the two together that makes platforms like Google valuable to users and advertisers alike. Demand is tightly integrated between the two sides of the platform. Changes in user and advertiser preferences have outsized effects on overall platform value because each side responds to the other. If users experience an increase in price or a reduction in quality, they will use the platform less or just log off completely. Advertisers see this change in users and react by reducing their demand for ad placements as well. When advertisers drop out, the total amount of content also recedes and users react once again. Economists call these relationships demand interdependencies: the demand on one side of the market is interdependent with demand on the other. Research on magazines, newspapers, and social media sites all supports the existence of demand interdependencies.
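A toy feedback loop makes the interdependence concrete. Again, this is a stylized sketch under assumed functional forms, not a model from the literature: participation on each side is written as a function of the other side’s participation, and the equilibrium is the fixed point of the loop.

```python
# Stylized two-sided platform: each side's demand depends on the other's.
def equilibrium(user_quality: float, iterations: int = 200):
    users, advertisers = 1.0, 1.0  # normalized participation levels
    for _ in range(iterations):
        # Users respond to platform quality and to the content advertisers fund.
        users = user_quality * (0.5 + 0.5 * advertisers)
        # Advertisers respond to the size of the audience.
        advertisers = 0.9 * users
    return users, advertisers

base_u, base_a = equilibrium(user_quality=1.0)
shock_u, shock_a = equilibrium(user_quality=0.9)  # 10% quality drop, user side only

print(f"users fall {1 - shock_u / base_u:.0%}, advertisers fall {1 - shock_a / base_a:.0%}")
# Both sides fall by MORE than 10%: the advertiser response feeds back into
# user demand, amplifying the original shock -- a demand interdependency.
```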

Economists David Evans and Richard Schmalensee, who were cited extensively in the Supreme Court case Ohio v. American Express, explained the importance of integrating these relationships into competition analysis: “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. If those interdependencies are ignored, the typical analytical tools will yield incorrect assessments. Understanding these relationships makes the investigation all the more difficult.

The limits of remedies

Most likely, this investigation will follow the trajectory of Microsoft in the 1990s, when states did the legwork for a larger case brought by the Department of Justice (DoJ). The DoJ already has its own investigation into Google and will probably pull together all of the parties for one large suit. Google is also subject to a probe by the House Judiciary Committee. What is certain is that Google will be saddled with years of regulatory scrutiny; what remains unclear is what kind of changes the AGs are after.

The investigation might aim to secure behavioral changes, but these often come with a cost in platform industries. The European Commission, for example, got Google to change its practices with its Android operating system for mobile phones. Much like search and advertising, the Android ecosystem is a platform with cross-subsidization and demand interdependencies between the various sides of the market. Because the company was ordered to stop tying the Android operating system to its apps, manufacturers of phones and tablets now have to pay a licensing fee in Europe if they want Google’s apps and the Play Store. A remedy meant to change one side of the platform ended up unbundling those relationships. When regulators force cross-subsidization to become explicit prices, consumers are the ones who pay.

The absolute worst-case scenario would be a breakup of Google, which has been a centerpiece of Senator Elizabeth Warren’s presidential platform. As I explained last year, that would be a death warrant for the company:

[T]he value of both Facebook and Google comes in creating the platform, which combines users with advertisers. Before the integration of ad networks, the search engine industry was struggling and it was simply not a major player in the Internet ecosystem. In short, the search engines, while convenient, had no economic value. As Michael Moritz, a major investor of Google, said of those early years, “We really couldn’t figure out the business model. There was a period where things were looking pretty bleak.” But Google didn’t pave the way. Rather, Bill Gross at GoTo.com succeeded in showing everyone how advertising could work to build a business. Google founders Larry Page and Sergey Brin merely adopted the model in 2002 and by the end of the year, the company was profitable for the first time. Marrying the two sides of the platform created value. Tearing them apart will also destroy value.

The state AGs need to resist making this investigation into a political showcase. As Pew noted in documenting the rise of North Carolina Attorney General Josh Stein to national prominence, “What used to be a relatively high-profile position within a state’s boundaries has become a springboard for publicity across the country.” While some might cheer the opening of this investigation, consumer welfare needs to be front and center. To understand properly how an investigation might affect consumer welfare, the state AGs need to take seriously the path already laid out by platform economics. For the sake of consumers, let’s hope they are up to the task.

[This post is the fifth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by William Rinehart, Director of Technology and Innovation Policy at American Action Forum.]

Back in May, the New York Times published an op-ed by Chris Hughes, one of the founders of Facebook, in which he called for the breakup of his former firm. Hughes joins a growing chorus, including Senator Warren, Roger McNamee, and others, who have called for the breakup of “Big Tech” companies. If Business Insider’s polling is correct, this chorus seems to be quite effective: nearly 40 percent of Americans now support breaking up Facebook.

Hughes’ position is perhaps understandable given his other advocacy activities, but it is worth bearing in mind that he was likely never particularly familiar with, or involved in, Facebook’s technical backend, business development, or sales. Rather, he was important in setting up the company’s public relations and feedback mechanisms. This is relevant because the technical and organizational challenges in breaking up big tech are enormous and underappreciated.

The Technics of Structural Remedies

As I explained at AAF last year,

Any trust-busting action would also require breaking up the company’s technology stack — a general name for the suite of technologies powering web sites. For example, Facebook developed its technology stack in-house to address the unique problems facing Facebook’s vast troves of data. Facebook created BigPipe to dynamically serve pages faster, Haystack to store billions of photos efficiently, Unicorn for searching the social graph, TAO for storing graph information, Peregrine for querying, and MysteryMachine to help with end-to-end performance analysis. The company also invested billions in data centers to quickly deliver video, and it split the cost of an undersea cable with Microsoft to speed up information travel. Where do you cut these technologies when splitting up the company?

That list, however, leaves out the company’s backend AI platform, known as Horizon. As Christopher Mims reported in the Wall Street Journal, Facebook put serious resources into creating Horizon and it has paid off. About a fourth of the engineers at the company were using the platform in 2017, even though only 30 percent of them were experts in it. The system, as Joaquin Candela explained, is powerful because it was built to be “a very modular layered cake where you can plug in at any level you want.” As Mims was careful to explain, the platform was designed to be highly modular rather than domain-specific; in other words, Horizon was meant to be useful across a range of complex problems and different domains. If WhatsApp and Instagram were separated from Facebook, who gets that asset? Does Facebook retain the core tech and then have to sell it at a regulated rate?

Lessons from Attempts to Manage Competition in the Tobacco Industry 

For all of the talk about breaking up Facebook and other tech companies, few really grasp just how lackluster this remedy has been in the past. The classic case to study isn’t AT&T or Standard Oil, but the American Tobacco Company.

The American Tobacco Company came about after a series of mergers in 1890 orchestrated by J.B. Duke. Then, between 1907 and 1911, the federal government filed and eventually won an antitrust lawsuit, which dissolved the trust into three companies. 

Duke was unique for his time because he worked to merge all of the previously separate companies into a coherent, working firm. The organization that stood trial in 1907 was a modern company, organized around a functional structure. A single purchasing department managed all the leaf purchasing. Tobacco processing plants were dedicated to specific products without any concern for their previous ownership. The American Tobacco Company was rational in a way few other companies were at the time.

These divisions were pulled apart over eight months. Factories, distribution and storage facilities, back offices and name brands were all separated by government fiat. It was a difficult task. As historian Allan M. Brandt details in “The Cigarette Century,”

It was one thing to identify monopolistic practices and activities in restraint of trade, and quite another to figure out how to return the tobacco industry to some form of regulated competition. Even those who applauded the breakup of American Tobacco soon found themselves critics of the negotiated decree restructuring the industry. This would not be the last time that the tobacco industry would successfully turn a regulatory intervention to its own advantage.

So how did consumers fare after the breakup? Most research suggests that the breakup didn’t substantially change the markets where American Tobacco was involved. Real cigarette prices for consumers were stable, suggesting there wasn’t price competition. The three companies coming out of the suit earned the same profit from 1912 to 1949 as the original American Tobacco Company Trust earned in its heyday from 1898 to 1908. As for the upstream suppliers, the price paid to tobacco farmers didn’t change either. The breakup was a bust.  

The difficulties in breaking up American Tobacco stand in contrast to the methods employed with Standard Oil and AT&T. For them, the split was made along geographic lines. Standard Oil was broken into 34 regional companies. Standard Oil of New Jersey became Exxon, while Standard Oil of California changed its name to Chevron. In the same way, AT&T was broken up into Regional Bell Operating Companies. Facebook doesn’t have geographic lines.

The Lessons of the Past Applied to Facebook

Facebook combines elements of the two primary firm structures and is thus considered a “matrix form” company. While the American Tobacco Company employed a functional organization, the most common form of company organization today is the divisional form. This method of firm rationalization separates the company’s operational functions by product, in order to optimize efficiencies. Under a divisional structure, each product is essentially a company unto itself. Engineering, finance, sales, and customer service are all unified within one division, which sits separate from other divisions within a company. Like countless other tech companies, Facebook merges elements of the two forms. It relies upon flexible teams to solve problems that tend to cross the normal divisional and functional bounds. Communication and coordination are prioritized among teams, and Facebook invests heavily to ensure cross-company collaboration.

Advocates think that undoing the WhatsApp and Instagram mergers will be easy, but there aren’t clean divisional lines within the company. Indeed, Facebook has been working for some time on a vast reengineering of its backend that, when completed later this year or in early 2020, will effectively merge all of the services into one ecosystem. Attempting to dismember this ecosystem would almost certainly be disastrous; not just a legal nightmare, but a technical and organizational one as well.

Much like American Tobacco, any attempt to split off WhatsApp and Instagram from Facebook will probably fall flat on its face because government officials will have to create three regulated firms, each with essentially duplicative structures. As a result, the quality of services offered to consumers will likely be inferior to those available from the integrated firm. In other words, this would be a net loss to consumers.

I’m of two minds on the issue of tech expertise in Congress.

Yes, there is good evidence that members of Congress and congressional staff lack broad technical expertise. Scholars Zach Graves and Kevin Kosar have detailed these problems, as has Travis Moore, who wrote, “Of the 3,500 legislative staff on the Hill, I’ve found just seven that have any formal technical training.” Moore continued with a description of his time as a staffer that I think is honest:

In Congress, especially in a member’s office, very few people are subject-matter experts. The best staff depend on a network of trusted friends and advisors, built from personal relationships, who can help them break down the complexities of an issue.

But on the other hand, it is not clear that more tech expertise at Congress’ disposal would lead to better outcomes. Over at the American Action Forum, I explored this topic in depth. Since publishing that piece in October, I’ve come to recognize two gaps that I didn’t address originally. The first relates to expert bias; the second concerns office organization.

Expert Bias In Tech Regulation

Let’s assume for the moment that legislators do become more technically proficient by any number of means. If policymakers are normal people, and let me tell you, they are, the result will be overconfidence of one sort or another. In psychology research, overconfidence includes three distinct ways of thinking. Overestimation is thinking that you are better than you are. Overplacement is the belief that you are better than others. And overprecision is excessive faith that you know the truth.

For political experts, overprecision is common. A long-term study of over 82,000 expert political forecasts by Philip E. Tetlock found that this group performed worse than they would have had they simply chosen outcomes at random. In the technical parlance, this means expert opinions were not calibrated; there was no correspondence between the predicted probabilities and the observed frequencies. Moreover, Tetlock found that events experts deemed impossible occurred with some regularity. In a number of fields, these supposedly impossible events came to pass as much as 20 or 30 percent of the time. As Tetlock and co-author Dan Gardner explained, “our ability to predict human affairs is impressive only in its mediocrity.”
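“Calibration” has a precise meaning here, and a short sketch (with fabricated forecasts, purely for illustration) shows how it is checked: among all the events a forecaster assigned probability p, roughly a fraction p should actually occur.

```python
# Calibration check on simulated forecasts (fabricated data for illustration).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
predicted = rng.uniform(0, 1, n)   # the expert's stated probabilities
# An overprecise expert: outcomes are unrelated to stated confidence.
occurred = rng.random(n) < 0.5

edges = np.linspace(0, 1, 11)
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (predicted >= lo) & (predicted < hi)
    print(f"stated {lo:.1f}-{hi:.1f}: observed {occurred[in_bin].mean():.2f}")
# A calibrated forecaster would print observed frequencies that climb with
# the stated probabilities; here every bin prints ~0.50 -- no correspondence.
```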

While there aren’t many studies on the topic of expertise within government, workers within agencies have been shown to exhibit overconfidence as well. As researchers Xinsheng Liu, James Stoutenborough, and Arnold Vedlitz discovered in surveying bureaucrats,

Our analyses demonstrate that (a) the level of issue‐specific expertise perceived by individual bureaucrats is positively associated with their work experience/job relevance to climate change, (b) more experienced bureaucrats tend to be more overconfident in assessing their expertise, and (c) overconfidence, independently of sociodemographic characteristics, attitudinal factors and political ideology, correlates positively with bureaucrats’ risk‐taking policy choices.    

The expert bias literature offers two lessons. First, more expertise doesn’t necessarily lead to better predictions or outcomes. Indeed, there are good reasons to suspect that more expertise would lead to overconfident policymakers and riskier policy choices.

But second, and more importantly, what is meant by tech expertise needs to be more closely examined. Advocates want better decision-making processes within government, a laudable goal. But staffing government agencies and Congress with experts doesn’t get you there. As in countless other areas, there are diminishing marginal predictive returns to knowledge. Rather than an injection of expertise, better methods of judgment should be pursued. Getting to that point will be a much more difficult goal.

The Production Function of Political Offices

As last year was winding down, Google CEO Sundar Pichai appeared before the House Judiciary Committee to answer questions regarding Google’s search engine. Coverage of the event by various outlets was similar in taking members to task for their apparent lack of knowledge about the search engine. Here is how Mashable’s Matt Binder described the event:

The main topic of the hearing — anti-conservative bias within Google’s search engine — really puts how little Congress understands into perspective. Early on in the hearing, Rep. Lamar Smith claimed as fact that 96 percent of Google search results come from liberal sources. Besides being proven false with a simple search of your own, Google’s search algorithm bases search rankings on attributes such as backlinks and domain authority. Partisanship of the news outlet does not come into play. Smith asserted that he believe [sic] the results are being manipulated, regardless of being told otherwise.
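Binder’s point about backlinks refers to the family of link-analysis algorithms behind Google’s rankings, of which PageRank is the canonical published example. A minimal sketch on a made-up four-page web (illustrative only; Google’s production ranking uses many more signals):

```python
# PageRank power iteration on a tiny, made-up link graph.
import numpy as np

# links[i][j] = 1 means page i links to page j (hypothetical four-page web).
links = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [0, 1, 0, 1],
                  [1, 0, 0, 0]], dtype=float)

transition = links / links.sum(axis=1, keepdims=True)  # row-stochastic
damping, n = 0.85, links.shape[0]

rank = np.full(n, 1.0 / n)
for _ in range(100):  # iterate to the fixed point
    rank = (1 - damping) / n + damping * (rank @ transition)

print(rank.round(3))
# Rankings fall out of the link structure -- who points to whom -- not from
# any judgment about a page's politics.
```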

Smith wasn’t alone; both Representative Steve Chabot and Representative Steve King brought up concerns of anti-conservative bias. Towards the end of the piece, Binder laid bare his concern, which is shared by many:

There are certainly many concerns and critiques to be had over algorithms and data collection when it comes to Google and its products like Google Search and Google Ads. Sadly, not much time was spent on this substance at Tuesday’s hearing. Google-owned YouTube, the second most trafficked website in the world after Google, was barely addressed at the hearing tool. [sic]

Notice the assumption built into this critique: true substantive debate would probe Google’s data collection practices instead of the bias of its search results. Using this framing, it seems clear that members of Congress don’t understand tech. But there is a better way to understand this hearing, which requires asking a more mundane question: why were political actors like Representatives Chabot, King, and Smith so concerned with how they appeared in Google results?

Political scientists Gary Lee Malecha and Daniel J. Reagan offer a convincing answer in The Public Congress. As they document, political offices over the past two decades have been reoriented by the 24-hour news cycle. Legislative life now unfolds live in front of cameras and microphones and in videos online. Over time, external communication has risen to a prominent role in congressional offices, in key ways overtaking policy analysis.

While this internal change doesn’t lend itself to any hard-and-fast conclusions, it could help explain why boosting tech expertise hasn’t been a winning legislative issue: the demand just isn’t there. And given the priorities offices actually display, more expertise might not yield any benefits, while also giving offices potential cover.

All of this being said, there are convincing reasons why more tech expertise could be beneficial. Yet, policymakers and the public shouldn’t assume that these reforms will be unalloyed goods.

The negativity that surrounded the deal at its announcement made Whole Foods seem like an innocent player, but it is important to recall that the company was hemorrhaging money and looking for an exit. Throughout the 2010s, it lost its market-leading edge as others began to offer the same kinds of services and products. Still, Whole Foods was able to sell to Amazon near the top of its value because it was able to court so many suitors. Given all of these features, Whole Foods could have been using the exit as a mechanism to appropriate another firm’s rent.
