These days, lacking a coherent legal theory presents no challenge to the would-be antitrust crusader. In a previous post, we noted how Shaoul Sussman’s predatory pricing claims against Amazon lacked a serious legal foundation. Sussman has returned with a new post that tries to build out his fledgling theory, but it fares little better under even casual scrutiny.

According to Sussman, Amazon’s allegedly anticompetitive 

conduct not only cemented its role as the primary destination for consumers that shop online but also helped it solidify its power over brands.

Further, the company 

was willing to go to great lengths to ensure brand availability and inventory, including turning to the grey market, recruiting unauthorized sellers, and even selling diverted goods and counterfeits to its customers.

Sussman is trying to make out a fairly convoluted predatory pricing case, but once again without ever truly connecting the dots in a way that develops a cognizable antitrust claim. According to Sussman: 

Amazon sold products as a first-party to consumers on its platform at below average variable cost and [] Amazon recently began to recoup its losses by shifting the bulk of the transactions that occur on the website to its marketplace, where millions of third-party sellers pay hefty fees that enable Amazon to take a deep cut of every transaction.

Sussman now bases this claim on an allegation that Amazon relied on “grey market” sellers on its platform, the presence of which forces legitimate brands onto the Amazon Marketplace. Moreover, Sussman claims that — somehow — these brands coming on board on Amazon’s terms forces them to raise prices elsewhere, and that the net effect of this process at scale is that prices across the economy have risen.

As we detail below, Sussman’s chimerical argument depends on conflating unrelated concepts and relies on non-public anecdotal accounts to piece together an argument that, even if you squint at it, doesn’t make out a viable theory of harm.

Conflating legal reselling and illegal counterfeit selling as the “grey market”

The biggest problem with Sussman’s new theory is that he conflates pro-consumer unauthorized reselling and anti-consumer illegal counterfeiting, erroneously labeling both the “grey market”: 

Amazon had an ace up its sleeve. My sources indicate that the company deliberately turned to and empowered the “grey market” — where both genuine, authentic goods and knockoffs are purchased and resold outside of brands’ intended distribution pipes — to dominate certain brands.

By definition, grey market goods are — as the link provided by Sussman states — “goods sold outside the authorized distribution channels by entities which may have no relationship with the producer of the goods.” Yet Sussman suggests this also encompasses counterfeit goods. This conflation is no minor problem for his argument. In general, the grey market is legal and beneficial for consumers. Brands such as Nike may try to limit the distribution of their products to channels the company controls, but they cannot legally prevent third parties from purchasing Nike products and reselling them on Amazon (or anywhere else).

This legal activity can increase consumer choice and can lead to lower prices, even though Sussman’s framing omits these key possibilities:

In the course of my conversations with former Amazon employees, some reported that Amazon actively sought out and recruited unauthorized sellers as both third-party sellers and first-party suppliers. Being unauthorized, these sellers were not bound by the brands’ policies and therefore outside the scope of their supervision.

In other words, Amazon actively courted third-party sellers who could bring legitimate goods, priced competitively, onto its platform. Perhaps this gives Amazon “leverage” over brands that would otherwise like to control the activities of legal resellers, but it’s exceedingly strange to try to frame this as nefarious or anticompetitive behavior.

Of course, we shouldn’t ignore the fact that there are also potential consumer gains when Amazon tries to restrict grey market activity by partnering with brands. But it is up to Amazon and the brands to determine through a contracting process when it makes the most sense to partner and control the grey market, or when consumers are better served by allowing unauthorized resellers. The point is: there is simply no reason to assume that either of these approaches is inherently problematic. 

Yet, even when Amazon tries to restrict its platform to authorized resellers, it exposes itself to a whole different set of complaints. In 2018, the company made a deal with Apple to bring the iPhone maker onto its marketplace platform. In exchange for Apple selling its products directly on Amazon, the latter agreed to remove unauthorized Apple resellers from the platform. Sussman portrays this as a welcome development in line with the policy changes he recommends. 

But news reports last month indicate the FTC is reviewing this deal for potential antitrust violations. One is reminded of Ronald Coase’s famous lament that he “had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down they said it was predatory pricing, and when they stayed the same they said it was tacit collusion.” It seems the same is true for Amazon and its relationship with the grey market.

Amazon’s incentive to remove counterfeits

What is illegal — and explicitly against Amazon’s marketplace rules  — is selling counterfeit goods. Counterfeit goods destroy consumer trust in the Amazon ecosystem, which is why the company actively polices its listings for abuses. And as Sussman himself notes, when there is an illegal counterfeit listing, “Brands can then file a trademark infringement lawsuit against the unauthorized seller in order to force Amazon to suspend it.”

Sussman’s attempt to hang counterfeiting problems around Amazon’s neck obscures the actual truth about counterfeiting: probably the most cost-effective way to stop counterfeiting is simply to prohibit all third-party sellers. Yet a serious cost-benefit analysis of Amazon’s platforms could hardly support such an action (and it would harm the small sellers that antitrust activists seem most concerned about).

But, more to the point, if Amazon’s strategy is to encourage counterfeiting, it’s doing a terrible job. It engages in litigation against known counterfeiters, and earlier this year it rolled out a suite of tools (called Project Zero) meant to help brand owners report and remove known counterfeits. As part of this program, according to Amazon, “brands provide key data points about themselves (e.g., trademarks, logos, etc.) and we scan over 5 billion daily listing update attempts, looking for suspected counterfeits.” And when a brand identifies a counterfeit listing, it can remove the listing using a self-service tool (without needing approval from Amazon).

Any large platform that tries to make it easy for independent retailers to reach customers is going to run into a counterfeit problem eventually. In his rush to discover some theory of predatory pricing to stick on Amazon, Sussman ignores the tradeoffs implicit in running a large platform that essentially democratizes retail:

Indeed, the democratizing effect of online platforms (and of technology writ large) should not be underestimated. While many are quick to disparage Amazon’s effect on local communities, these arguments fail to recognize that by reducing the costs associated with physical distance between sellers and consumers, e-commerce enables even the smallest merchant on Main Street, and the entrepreneur in her garage, to compete in the global marketplace.

In short, Amazon Marketplace is designed to make it as easy as possible for anyone to sell their products to Amazon customers. As the WSJ reported:

Counterfeiters, though, have been able to exploit Amazon’s drive to increase the site’s selection and offer lower prices. The company has made the process to list products on its website simple—sellers can register with little more than a business name, email and address, phone number, credit card, ID and bank account—but that also has allowed impostors to create ersatz versions of hot-selling items, according to small brands and seller consultants.

The existence of counterfeits is a direct result of policies designed to lower prices and increase consumer choice. Thus, we would expect some number of counterfeits to exist as a result of running a relatively open platform. The question is not whether counterfeits exist, but — at least in terms of Sussman’s attempt to use antitrust law — whether there is any reason to think that Amazon’s conduct with respect to counterfeits is actually anticompetitive. But, even if we assume for the moment that there is some plausible way to draw a competition claim out of the existence of counterfeit goods on the platform, his theory still falls apart. 

There is both theoretical and empirical evidence for why Amazon is likely not engaged in the conduct Sussman describes. As a platform owner involved in a repeated game with customers, sellers, and developers, Amazon has an incentive to increase trust within the ecosystem. Counterfeit goods directly destroy that trust and likely decrease sales in the long run. If individuals can’t depend on the quality of goods on Amazon, they can easily defect to Walmart, eBay, or any number of smaller independent sellers. That’s why Amazon enters into agreements with companies like Apple to ensure there are only legitimate products offered. That’s also why Amazon actively sues counterfeiters in partnership with its sellers and brands, and also why Project Zero is a priority for the company.

Sussman relies on private, anecdotal claims while engaging in speculation that is entirely unsupported by public data 

Much of Sussman’s evidence is “[b]ased on conversations [he] held with former employees, sellers, and brands following the publication of [his] paper”, which — to put it mildly — makes it difficult for anyone to take seriously, let alone address head on. Here’s one example:

One third-party seller, who asked to remain anonymous, was willing to turn over his books for inspection in order to illustrate the magnitude of the increase in consumer prices. Together, we analyzed a single product, of which tens of thousands of units have been sold since 2015. The minimum advertised price for this single product, at any and all outlets, has increased more than 30 percent in the past four years. Despite this fact, this seller’s margins on this product are tighter than ever due to Amazon’s fee increases.

Needless to say, sales data showing the minimum advertised price for a single product “has increased more than 30 percent in the past four years” is not sufficient to prove, well, anything. At minimum, showing an increase in prices above costs would require data from a large and representative sample of sellers. All we have to go on from the article is a vague anecdote representing — maybe — one data point.

Not only is Sussman’s own data impossible to evaluate, but he bases his allegations on speculation that is demonstrably false. For instance, he asserts that Amazon used its leverage over brands in a way that caused retail prices to rise throughout the economy. But his starting point assumption is flatly contradicted by reality: 

To remedy this, Amazon once again exploited brands’ MAP policies. As mentioned, MAP policies effectively dictate the minimum advertised price of a given product across the entire retail industry. Traditionally, this meant that the price of a typical product in a brick and mortar store would be lower than the price online, where consumers are charged an additional shipping fee at checkout.

Sussman presents no evidence for the claim that “the price of a typical product in a brick and mortar store would be lower than the price online.” The widespread phenomenon of showrooming — when a customer examines a product at a brick-and-mortar store but then buys it for a lower price online — belies the notion that prices are higher online. One recent study by Nielsen found that “nearly 75% of grocery shoppers have used a physical store to ‘showroom’ before purchasing online.”

In fact, the company’s downward pressure on prices is so large that researchers now speculate that Amazon and other internet retailers are partially responsible for the low and stagnant inflation in the US over the last decade (dubbing this the “Amazon effect”). It is also curious that Sussman cites shipping fees as the reason prices are higher online while ignoring all the overhead costs of running a brick-and-mortar store which online retailers don’t incur. The assumption that prices are lower in brick-and-mortar stores doesn’t pass the laugh test.

Conclusion

Sussman can keep trying to tell a predatory pricing story about Amazon, but the more convoluted his theories get — and the less based in empirical reality they are — the less convincing they become. There is a predatory pricing law on the books, but it’s hard to bring a case because, as it turns out, it’s actually really hard to profitably operate as a predatory pricer. Speculating over complicated new theories might be entertaining, but it would be dangerous and irresponsible if these sorts of poorly supported theories were incorporated into public policy.

The FTC’s recent YouTube settlement and $170 million fine related to charges that YouTube violated the Children’s Online Privacy Protection Act (COPPA) has the issue of targeted advertising back in the news. With an upcoming FTC workshop and COPPA Rule Review looming, it’s worth looking at this case in more detail and reconsidering COPPA’s 2013 amendment to the definition of personal information.

According to the complaint issued by the FTC and the New York Attorney General, YouTube violated COPPA by collecting personal information of children on its platform without obtaining parental consent. While the headlines scream that this is an egregious violation of privacy and parental rights, a closer look suggests that there is actually very little about the case that normal people would find to be all that troubling. Instead, it appears to be another in the current spate of elitist technopanics.

COPPA defines personal information to include persistent identifiers, like cookies, used for targeted advertising. These cookies allow site operators to have some idea of what kinds of websites a user may have visited previously. Having knowledge of users’ browsing history allows companies to advertise more effectively than is possible with contextual advertisements, which guess at users’ interests based upon the type of content being viewed at the time. The age-old problem for advertisers is that “half the money spent on advertising is wasted; the trouble is they don’t know which half.” While targeted advertising based on web browsing and search history doesn’t completely solve that problem, the fact that such advertising is more lucrative than contextual advertising suggests that it works better for companies.
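To illustrate the distinction in the abstract, consider the stylized sketch below. This is not YouTube’s or any real ad network’s system; the identifiers, pages, and keyword data are purely hypothetical. Contextual targeting looks only at the page being viewed, while behavioral targeting keys off a persistent identifier tied to prior browsing history.

# A stylized sketch, not any real ad platform's code; all identifiers, pages,
# and keyword data are hypothetical illustrations.

BROWSING_HISTORY = {
    # keyed by a hypothetical persistent identifier (e.g., a cookie value)
    "cookie-abc123": ["running shoes review", "marathon training plan"],
}

AD_INVENTORY = {
    "running-shoes": {"keywords": {"running", "shoes", "marathon"}, "cpm": 8.0},
    "breakfast-cereal": {"keywords": {"cereal", "breakfast", "cartoon"}, "cpm": 2.5},
}

def contextual_ad(page_keywords):
    """Pick the ad whose keywords best match the page being viewed right now."""
    return max(AD_INVENTORY, key=lambda ad: len(AD_INVENTORY[ad]["keywords"] & page_keywords))

def behavioral_ad(cookie_id):
    """Pick the ad whose keywords best match the browsing history tied to the cookie."""
    history = set()
    for page in BROWSING_HISTORY.get(cookie_id, []):
        history |= set(page.split())
    return max(AD_INVENTORY, key=lambda ad: len(AD_INVENTORY[ad]["keywords"] & history))

# On a children's cartoon page, the contextual engine serves the cereal ad,
# while the behavioral engine serves the running-shoe ad keyed to past browsing.
print(contextual_ad({"cartoon", "kids", "video"}))  # -> breakfast-cereal
print(behavioral_ad("cookie-abc123"))               # -> running-shoes

The only point of the toy example is that the behavioral version can monetize the same pageview at a higher rate (the higher hypothetical “cpm” figure), which is consistent with the observation that targeted advertising tends to be more lucrative than contextual advertising.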

COPPA, since the 2013 update, states that persistent identifiers are personal information by themselves, even if not linked to any other information that could be used to actually identify children (i.e., anyone under 13 years old). 

As a consequence of this rule, YouTube doesn’t allow children under 13 to create an account. Instead, YouTube created a separate mobile application called YouTube Kids with curated content targeted at younger users. That application serves only contextual advertisements that do not rely on cookies or other persistent identifiers, but the content available on YouTube Kids also remains available on YouTube. 

YouTube’s error, in the eyes of the FTC, was that it left it to channel owners on its general-audience site to determine whether to monetize their content through targeted advertising or to opt out and use only contextual advertisements. It turns out that many of those channels — including channels identified by the FTC as “directed to children” — made the more lucrative choice, opting for targeted advertisements on their channels.

Whether YouTube’s practices violate the letter of COPPA or not, a more fundamental question remains unanswered: What is the harm, exactly?

COPPA takes for granted that it is harmful for kids to receive targeted advertisements, even where, as here, the targeting is based not on any knowledge about the users as individuals, but upon the browsing and search history of the device they happen to be on. But children under 13 are extremely unlikely to have purchased the devices they use, to pay for the access to the Internet to use the devices, or to have any disposable income or means of paying for goods and services online. Which makes one wonder: at whom are the advertisements served to children actually targeted? The answer is obvious to everyone but the FTC and those who support the COPPA Rule: the children’s parents.

Television programs aimed at children have long been supported by contextual advertisements for cereal and toys. Tony the Tiger and Lucky the Leprechaun were staples of Saturday morning cartoons when I was growing up, along with all kinds of Hot Wheels commercials. As I soon discovered as a kid, I had the ability to ask my parents to buy these things, but ultimately no ability to buy them on my own. In other words: Parental oversight is essentially built into any type of advertisement children see, in the sense that few children can realistically make their own purchases or even view those advertisements without their parents giving them a device and internet access to do so.

When broken down like this, it is much harder to see the harm. It’s one thing to create regulatory schemes to prevent stalkers, creepers, and perverts from using online information to interact with children. It’s quite another to greatly reduce the ability of children’s content to generate revenue by use of relatively anonymous persistent identifiers like cookies — and thus, almost certainly, to greatly reduce the amount of content actually made for and offered to children.

On the one hand, COPPA thus disregards the possibility that controls that take advantage of parental oversight may be the most cost-effective form of protection in such circumstances. As Geoffrey Manne noted regarding the FTC’s analogous complaint against Amazon under the FTC Act, which ignored the possibility that Amazon’s in-app purchasing scheme was tailored to take advantage of parental oversight in order to avoid imposing excessive and needless costs:

[For the FTC], the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible….

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges….

At the same time, enforcement of COPPA against targeted advertising on kids’ content will have perverse and self-defeating consequences. As Berin Szoka notes:

This settlement will cut advertising revenue for creators of child-directed content by more than half. This will give content creators a perverse incentive to mislabel their content. COPPA was supposed to empower parents, but the FTC’s new approach actually makes life harder for parents and cripples functionality even when they want it. In short, artists, content creators, and parents will all lose, and it is not at all clear that this will do anything to meaningfully protect children.

This war against targeted advertising aimed at children has a cost. While many cheer the fine levied against YouTube (or think it wasn’t high enough) and the promised changes to its platform (though the dissenting Commissioners didn’t think those went far enough, either), the actual result will be less content — and especially less free content — available to children. 

Far from being a win for parents and children, the shift in oversight responsibility from parents to the FTC will likely lead to less-effective oversight, more difficult user interfaces, less children’s programming, and higher costs for everyone — all without obviously mitigating any harm in the first place.

Ursula von der Leyen has just announced the composition of the next European Commission. For tech firms, the headline is that Margrethe Vestager will not only retain her job as the head of DG Competition, she will also oversee the EU’s entire digital markets policy in her new role as Vice-President in charge of digital policy. Her promotion within the Commission as well as her track record at DG Competition both suggest that the digital economy will continue to be the fulcrum of European competition and regulatory intervention for the next five years.

The regulation (or not) of digital markets is an extremely important topic. Not only do we spend vast swaths of both our professional and personal lives online, but firms operating in digital markets will likely employ an ever-increasing share of the labor force in the near future.

Likely recognizing the growing importance of the digital economy, the previous EU Commission intervened heavily in the digital sphere over the past five years. This resulted in a series of high-profile regulations (including the GDPR, the platform-to-business regulation, and the reform of EU copyright) and competition law decisions (most notably the Google cases). 

Lauded by supporters of the administrative state, these interventions have drawn flak from numerous corners. This includes foreign politicians (especially  Americans) who see in these measures an attempt to protect the EU’s tech industry from its foreign rivals, as well as free market enthusiasts who argue that the old continent has moved further in the direction of digital paternalism. 

Vestager’s increased role within the new Commission, the EU’s heavy regulation of digital markets over the past five years, and early pronouncements from Ursula von der Leyen all suggest that the EU is in for five more years of significant government intervention in the digital sphere.

Vestager the slayer of Big Tech

During her five years as Commissioner for competition, Margrethe Vestager has repeatedly been called the most powerful woman in Brussels (see here and here), and it is easy to see why. Wielding the heavy hammer of European competition and state aid enforcement, she has relentlessly attacked the world’s largest firms, especially America’s so-called “Tech Giants”.

The record-breaking fines imposed on Google were probably her most high-profile victory. When Vestager entered office, in 2014, the EU’s case against Google had all but stalled. The Commission and Google had spent the best part of four years haggling over a potential remedy that was ultimately thrown out. Grabbing the bull by the horns, Margrethe Vestager made the case her own. 

Five years, three infringement decisions, and 8.25 billion euros later, Google probably wishes it had managed to keep the 2014 settlement alive. While Vestager’s supporters claim that justice was served, Barack Obama and Donald Trump, among others, branded her a protectionist (although, as Geoffrey Manne and I have noted, the evidence for this is decidedly mixed). Critics also argued that her decisions would harm innovation and penalize consumers (see here and here). Regardless, the case propelled Vestager into the public eye. It turned her into one of the most important political forces in Brussels. Cynics might even suggest that this was her plan all along.

But Google is not the only tech firm to have squared off with Vestager. Under her watch, Qualcomm was slapped with a total of €1.239 billion in fines. The Commission also opened an investigation into Amazon’s operation of its online marketplace. If previous cases are anything to go by, the probe will most probably end with a headline-grabbing fine. The Commission even launched a probe into Facebook’s planned Libra cryptocurrency, which has yet to launch and, recent talk suggests, may never do so. Finally, in the area of state aid enforcement, the Commission ordered Ireland to recover €13 billion in allegedly undue tax benefits from Apple.

Margrethe Vestager also initiated a large-scale consultation on competition in the digital economy. The ensuing report concluded that the answer was more competition enforcement. Its findings will likely be cited by the Commission as further justification to ramp up its already significant competition investigations in the digital sphere.

Outside of the tech sector, Vestager has shown that she is not afraid to adopt controversial decisions. Blocking the proposed merger between Siemens and Alstom notably drew the ire of Angela Merkel and Emmanuel Macron, as the deal would have created a European champion in the rail industry (a key political demand in Germany and France). 

These numerous interventions all but guarantee that Vestager will not be pushing for light touch regulation in her new role as Vice-President in charge of digital policy. Vestager is also unlikely to put a halt to some of the “Big Tech” investigations that she herself launched during her previous spell at DG Competition. Finally, given her evident political capital in Brussels, it’s a safe bet that she will be given significant leeway to push forward landmark initiatives of her choosing. 

Vestager the prophet

Beneath these attempts to rein in “Big Tech” lies a deeper agenda that is symptomatic of the EU’s current zeitgeist. Over the past couple of years, the EU has been steadily blazing a trail in digital market regulation (although much less so in digital market entrepreneurship and innovation). Underlying this push is a worldview that sees consumers and small startups as the uninformed victims of gigantic tech firms. True to form, the EU’s solution to this problem is more regulation and government intervention. This is unlikely to change given the Commission’s new (old) leadership.

If digital paternalism is the dogma, then Margrethe Vestager is its prophet. As Thibault Schrepel has shown, her speeches routinely call for digital firms to act “fairly”, and for policymakers to curb their “power”. According to her, it is our democracy that is at stake. In her own words, “you can’t sensibly talk about democracy today, without appreciating the enormous power of digital technology”. And yet, if history tells us one thing, it is that heavy-handed government intervention is anathema to liberal democracy. 

The Commission’s Google decisions neatly illustrate this worldview. For instance, in Google Shopping, the Commission concluded that Google was coercing consumers into using its own services, to the detriment of competition. But the Google Shopping decision focused entirely on competitors, and offered no evidence showing actual harm to consumers (see here). Could it be that users choose Google’s products because they actually prefer them? Rightly or wrongly, the Commission went to great lengths to dismiss evidence that arguably pointed in this direction (see here, §506-538).

Other European forays into the digital space are similarly paternalistic. The General Data Protection Regulation (GDPR) assumes that consumers are ill-equipped to decide what personal information they share with online platforms. Cue a deluge of time-consuming consent forms and cookie-related pop-ups. The jury is still out on whether the GDPR has improved users’ privacy. But it has been extremely costly for businesses — American S&P 500 companies and UK FTSE 350 companies alone spent an estimated total of $9 billion to comply with the GDPR — and has at least temporarily slowed venture capital investment in Europe.

Likewise, the recently adopted Regulation on platform-to-business relations operates under the assumption that small firms routinely fall prey to powerful digital platforms: 

Given that increasing dependence, the providers of those services [i.e. digital platforms] often have superior bargaining power, which enables them to, in effect, behave unilaterally in a way that can be unfair and that can be harmful to the legitimate interests of their business users and, indirectly, also of consumers in the Union. For instance, they might unilaterally impose on business users practices which grossly deviate from good commercial conduct, or are contrary to good faith and fair dealing.

But the platform-to-business Regulation conveniently overlooks the fact that economic opportunism is a two-way street. Small startups are equally capable of behaving in ways that greatly harm the reputation and profitability of much larger platforms. The Cambridge Analytica leak springs to mind. And what’s “unfair” to one small business may offer massive benefits to other businesses and consumers.

Make what you will of the underlying merits of these individual policies; we should at least recognize that they are part of a greater whole, in which Brussels is regulating ever more aspects of our online lives — and not clearly for the benefit of consumers.

With Margrethe Vestager now overseeing even more of these regulatory initiatives, readers should expect more of the same. The Mission Letter she received from Ursula von der Leyen is particularly enlightening in that respect: 

I want you to coordinate the work on upgrading our liability and safety rules for digital platforms, services and products as part of a new Digital Services Act…. 

I want you to focus on strengthening competition enforcement in all sectors. 

A hard rain’s a-gonna fall… on Big Tech

Today’s announcements all but confirm that the EU will stay its current course in digital markets. This is unfortunate.

Digital firms currently provide consumers with tremendous benefits at no direct charge. A recent study shows that median users would need to be paid €15,875 to give up search engines for a year. They would also require €536 in order to forgo WhatsApp for a month, €97 for Facebook, and €59 to drop digital maps for the same duration. 

By continuing to heap ever more regulations on successful firms, the EU risks killing the goose that laid the golden egg. This is not just a theoretical possibility. The EU’s policies have already put technology firms under huge stress, and it is not clear that this has always been outweighed by benefits to consumers. The GDPR has notably caused numerous foreign firms to stop offering their services in Europe. And the EU’s Google decisions have forced Google to start charging manufacturers for some of its apps. Are these really victories for European consumers?

It is also worth asking why there are so few European leaders in the digital economy. Not so long ago, European firms such as Nokia and Ericsson were at the forefront of the digital revolution. Today, with the possible exception of Spotify, the EU has fallen further down the global pecking order in the digital economy. 

The EU knows this, and plans to invest €100 billion in order to boost European tech startups. But these sums will be all but wasted if excessive regulation threatens the long-term competitiveness of European startups.

So if more of the same government intervention isn’t the answer, then what is? Recognizing that consumers have agency and are responsible for their own decisions might be a start. If you don’t like Facebook, close your account. Want a search engine that protects your privacy? Try DuckDuckGo. If YouTube and Spotify’s suggestions don’t appeal to you, create your own playlists and turn off the autoplay functions. The digital world has given us more choice than we could ever have dreamt of; but this comes with responsibility. Both Margrethe Vestager and the European institutions have often seemed oblivious to this reality. 

If the EU wants to turn itself into a digital economy powerhouse, it will have to switch towards light-touch regulation that allows firms to experiment with disruptive services, flexible employment options, and novel monetization strategies. But getting there requires a fundamental rethink — one that the EU’s previous leadership refused to contemplate. Margrethe Vestager’s dual role within the next Commission suggests that change isn’t coming any time soon.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are supported by sympathetic emotional appeals through personal anecdotes. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued Midwest farmers from shortages that would have decimated their businesses, but also invested heavily to ensure that production would continue to increase to meet future demand.

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market, which is to discover slack or scarce resources in the system and manage them so that they are available for utilization when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy.

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described concerning how best to facilitate efficient use of dispersed knowledge in such a way as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning”  per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs  — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history”  — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but of a firm that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and nimbly respond.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.   

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group (quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986))).

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant.

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its holding in Microsoft in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In a word, what the Court requires is that the defendant exhibit behavior that, but-for the expectation of future, anticompetitive returns, is irrational.
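In game-theoretic terms, this is just the separating equilibrium condition quoted above. The following is a stylized gloss (ours, not the brief’s or the Court’s); the profit terms and the “type” labels are illustrative assumptions, not anything in the record:

% Stylized gloss: let $\pi_C$ be the monopolist's short-run profit from the
% challenged conduct, $\pi_N$ its short-run profit from refraining, and
% $R \ge 0$ the expected future return from excluding rivals, with $R > 0$
% only for an anticompetitive "type" of firm.
\[
  \text{choose the conduct} \iff \pi_C + R > \pi_N .
\]
% If $\pi_C > \pi_N$, both types (procompetitive, $R = 0$, and anticompetitive,
% $R > 0$) choose the conduct, so observing it supports no inference (a pooling
% equilibrium). Only when $\pi_C < \pi_N$, i.e., when the conduct sacrifices
% short-run profit, does the choice separate the types and permit the inference
% that $R > 0$, i.e., that the monopolist anticipates anticompetitive returns.

On this reading, the refusal of a currently more profitable arrangement is the short-run sacrifice that separates the types, which is exactly the element the district court’s finding does not establish.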

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

To infer anticompetitive effect, it is not enough that a firm has a “duty” to deal in the colloquial sense, based on some obligation other than an antitrust duty. Evading such an obligation supports no inference that the conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus, NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” does not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
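To see how much turns on the assumptions the court never examined, consider a minimal numerical sketch. Nothing in it comes from the record: the constant-elasticity demand curve, the $400 phone price, the $10 surcharge, and the pass-through rates are all invented for illustration (the third scenario, switching chip suppliers, is not modeled).

```python
# Illustrative only: invented numbers, not evidence from the case.
# Shows how an assumed per-phone cost increase maps into phone (and hence chip)
# sales under different demand elasticities and pass-through assumptions.

def phones_sold(price, base_price=400.0, base_qty=100.0, elasticity=-0.5):
    """Constant-elasticity demand: Q = base_qty * (P / base_price) ** elasticity."""
    return base_qty * (price / base_price) ** elasticity

surcharge = 10.0      # assumed cost increase per phone
base_price = 400.0    # assumed phone price before the increase

for elasticity in (-0.1, -0.5, -2.0):        # very inelastic through elastic demand
    for pass_through in (1.0, 0.5, 0.0):     # scenario 1 (full or partial) and scenario 2 (absorbed)
        q0 = phones_sold(base_price, elasticity=elasticity)
        q1 = phones_sold(base_price + pass_through * surcharge, elasticity=elasticity)
        print(f"elasticity={elasticity:+.1f}  pass-through={pass_through:4.0%}  "
              f"change in phone/chip sales: {100 * (q1 / q0 - 1):+6.2f}%")
```

With inelastic demand or absorbed costs, the change in chip volumes is negligible, which is precisely why the quantification the court skipped matters.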

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]

Samsung SGH-F480V – controller board – Qualcomm MSM6280

In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two EU decisions; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout the document, which runs to more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occurs when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said as much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component-level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this was against Federal Circuit law. Instead, it argued that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the Court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   
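A toy comparison makes the intuition concrete. The sketch below is purely illustrative: the chip price, royalty rates, device prices, and sales figures are all invented and reflect no actual license terms.

```python
# Purely illustrative: none of these prices, rates, or volumes reflect real terms.
import random
random.seed(0)

chip_price = 20.0          # assumed uniform modem price across every device type
royalty_on_chip = 0.20     # hypothetical 20% royalty charged on the chip price
royalty_on_device = 0.02   # hypothetical 2% royalty charged on the end-device price

# Hypothetical devices: (name, resale price, uncertain unit sales)
devices = [("budget phone", 150, random.randint(10, 100)),
           ("flagship phone", 900, random.randint(10, 100)),
           ("connected sensor", 60, random.randint(10, 100))]

for name, price, units in devices:
    chip_based = units * chip_price * royalty_on_chip      # same fee regardless of device value
    device_based = units * price * royalty_on_device       # fee tracks the end device's value
    print(f"{name:16s} units={units:3d}  chip-base royalty=${chip_based:8.2f}  "
          f"device-base royalty=${device_based:8.2f}")
```

The device-based royalty automatically scales with the segments where the technology turns out to be most valuable, without the licensor having to predict in advance which devices will succeed.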

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Accordingly, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
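To restate the complementarity point formally (a stylized illustration, not anything drawn from the ruling): when two inputs are perfect complements and neither has standalone value, the incremental contribution of each equals the entire value of the bundle.

```latex
% Stylized perfect-complements illustration (assumed values, not from the record)
\[
V(\text{modem}, \text{other components}) = V_{\text{phone}}, \qquad
V(\text{modem alone}) = V(\text{other components alone}) = 0,
\]
\[
\text{so} \quad V_{\text{phone}} - V(\text{other components alone}) = V_{\text{phone}}
\quad \text{and} \quad
V_{\text{phone}} - V(\text{modem alone}) = V_{\text{phone}}.
\]
```

On that arithmetic, attributing “60-70% of the phone value” to non-communication components says nothing about what each component would be worth without the others.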

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The parameters that affect the price of a technology are so numerous that they are almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

And if David finds out the data beneath his profile, you’ll start to be able to connect the dots in various ways with Facebook and Cambridge Analytica and Trump and Brexit and all these loosely-connected entities. Because you get to see inside the beast, you get to see inside the system.

This excerpt from the beginning of Netflix’s The Great Hack shows the goal of the documentary: to provide one easy explanation for Brexit and the election of Trump, two of the most surprising electoral outcomes in recent history.

Unfortunately, in attempting to tell a simple narrative, the documentary obscures more than it reveals about what actually happened in the Facebook-Cambridge Analytica data scandal. In the process, the film wildly overstates the significance of the scandal in either the 2016 US presidential election or the 2016 UK referendum on leaving the EU.

In this article, I will review the background of the case and show seven things the documentary gets wrong about the Facebook-Cambridge Analytica data scandal.

Background

In 2013, researchers published a paper showing that you could predict some personality traits — openness and extraversion — from an individual’s Facebook Likes. Cambridge Analytica wanted to use Facebook data to create a “psychographic” profile — i.e., personality type — of each voter and then micro-target them with political messages tailored to their personality type, ultimately with the hope of persuading them to vote for Cambridge Analytica’s client (or at least to not vote for the opposing candidate).

In this case, the psychographic profile is the person’s Big Five (or OCEAN) personality traits, which research has shown are relatively stable throughout our lives (see the illustrative sketch after the list):

  1. Openness to new experiences
  2. Conscientiousness
  3. Extroversion
  4. Agreeableness
  5. Neuroticism
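For readers curious what “predicting personality from Likes” looks like mechanically, here is a minimal sketch on synthetic data. It mirrors the general recipe in the academic literature (reduce a sparse user-by-Like matrix, then regress a trait score on the components); it is not Cambridge Analytica’s actual pipeline, and every number in it is invented.

```python
# Synthetic illustration only: random "Likes" and a made-up openness score.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 500

likes = rng.binomial(1, 0.05, size=(n_users, n_likes))         # sparse user-by-Like matrix
weights = rng.normal(size=n_likes)
openness = likes @ weights * 0.05 + rng.normal(size=n_users)   # weak signal plus noise

X_train, X_test, y_train, y_test = train_test_split(likes, openness, random_state=0)

svd = TruncatedSVD(n_components=50, random_state=0).fit(X_train)   # compress the Like matrix
model = Ridge().fit(svd.transform(X_train), y_train)               # regress trait on components

# With a signal this weak and diffuse, expect the held-out R^2 to be close to zero.
print("R^2 on held-out users:", model.score(svd.transform(X_test), y_test))
```

Even this idealized setup recovers little, which is consistent with the skepticism in the sections that follow.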

But how to get the Facebook data to create these profiles? A researcher at Cambridge University, Alex Kogan, created an app called thisismydigitallife, a short quiz for determining your personality type. Between 250,000 and 270,000 people were paid a small amount of money to take this quiz. 

Those who took the quiz shared some of their own Facebook data as well as their friends’ data (so long as the friends’ privacy settings allowed third-party app developers to access their data). 

This process captured data on “at least 30 million identifiable U.S. consumers”, according to the FTC. For context, even if we assume all 30 million were registered voters, that means the data could be used to create profiles for less than 20 percent of the relevant population. And though some may disagree with Facebook’s policy for sharing user data with third-party developers, collecting data in this manner was in compliance with Facebook’s terms of service at the time.

What crossed the line was what happened next. Kogan then sold that data to Cambridge Analytica, without the consent of the affected Facebook users and in express violation of Facebook’s prohibition on selling Facebook data between third and fourth parties. 

Upon learning of the sale, Facebook directed Alex Kogan and Cambridge Analytica to delete the data. But the social media company failed to notify users that their data had been misused or confirm via an independent audit that the data was actually deleted.

1. Cambridge Analytica was selling snake oil (no, you are not easily manipulated)

There’s a line in The Great Hack that sums up the opinion of the filmmakers and the subjects in their story: “There’s 2.1 billion people, each with their own reality. And once everybody has their own reality, it’s relatively easy to manipulate them.” According to the latest research from political science, this is completely bogus (and it’s the same marketing puffery that Cambridge Analytica would pitch to prospective clients).

The best evidence in this area comes from Joshua Kalla and David E. Broockman in a 2018 study published in the American Political Science Review:

We argue that the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero. First, a systematic meta-analysis of 40 field experiments estimates an average effect of zero in general elections. Second, we present nine original field experiments that increase the statistical evidence in the literature about the persuasive effects of personal contact 10-fold. These experiments’ average effect is also zero.

In other words, the combined evidence from 49 high-quality field experiments (the 40 in the meta-analysis plus the authors’ nine new ones) shows that in US general elections, campaign contact and advertising have essentially zero effect on voters’ candidate choices. (However, there is evidence “campaigns are able to have meaningful persuasive effects in primary and ballot measure campaigns, when partisan cues are not present.”)

But the relevant conclusion for the Cambridge Analytica scandal remains the same: in highly visible elections with a polarized electorate, it simply isn’t that easy to persuade voters to change their minds.

2. Micro-targeting political messages is overrated — people prefer general messages on shared beliefs

But maybe Cambridge Analytica’s micro-targeting strategy would result in above-average effects? The literature provides reason for skepticism here as well. Another paper by Eitan D. Hersh and Brian F. Schaffner in The Journal of Politics found that voters “rarely prefer targeted pandering to general messages” and “seem to prefer being solicited based on broad principles and collective beliefs.” It’s political tribalism all the way down. 

A field experiment with 56,000 Wisconsin voters in the 2008 US presidential election found that “persuasive appeals possibly reduced candidate support and almost certainly did not increase it,” suggesting that  “contact by a political campaign can engender a backlash.”

3. Big Five personality traits are not very useful for predicting political orientation

Or maybe there’s something special about targeting political messages based on a person’s Big Five personality traits? Again, there is little reason to believe this is the case. As Kris-Stella Trump notes in an article for The Washington Post:

The ‘Big 5’ personality traits … only predict about 5 percent of the variation in individuals’ political orientations. Even accurate personality data would only add very little useful information to a data set that includes people’s partisanship — which is what most campaigns already work with.

The best evidence we have on the influence of personality traits on decision-making comes from the marketing literature (n.b., it’s likely easier to influence consumer decisions than political decisions in today’s increasingly polarized electorate). Here too the evidence is weak:

In this successful study, researchers targeted ads, based on personality, to more than 1.5 million people; the result was about 100 more purchases of beauty products than they would have seen had they advertised without targeting.

More to the point, the Facebook data obtained by Cambridge Analytica couldn’t even accomplish the simple task of matching Facebook Likes to the Big Five personality traits. Here’s Cambridge University researcher Alex Kogan in Michael Lewis’s podcast episode about the scandal: 

We started asking the question of like, well, how often are we right? And so there’s five personality dimensions? And we said like, okay, for what percentage of people do we get all five personality categories correct? We found it was like 1%.

Eitan Hersh, an associate professor of political science at Tufts University, summed it up best: “Every claim about psychographics etc made by or about [Cambridge Analytica] is BS.”

4. If Cambridge Analytica’s “weapons-grade communications techniques” were so powerful, then Ted Cruz would be president

The Great Hack:

Ted Cruz went from the lowest rated candidate in the primaries to being the last man standing before Trump got the nomination… Everyone said Ted Cruz had this amazing ground game, and now we know who came up with all of it. Joining me now, Alexander Nix, CEO of Cambridge Analytica, the company behind it all.

Reporting by Nicholas Confessore and Danny Hakim at The New York Times directly contradicts this framing on Cambridge Analytica’s role in the 2016 Republican presidential primary:

Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates.

Most significantly, the Cruz campaign stopped using Cambridge Analytica’s services in February 2016 due to disappointing results, as Kenneth P. Vogel and Darren Samuelsohn reported in Politico in June of that year:

Cruz’s data operation, which was seen as the class of the GOP primary field, was disappointed in Cambridge Analytica’s services and stopped using them before the Nevada GOP caucuses in late February, according to a former staffer for the Texas Republican.

“There’s this idea that there’s a magic sauce of personality targeting that can overcome any issue, and the fact is that’s just not the case,” said the former staffer, adding that Cambridge “doesn’t have a level of understanding or experience that allows them to target American voters.”

Vogel later tweeted that most firms hired Cambridge Analytica “because it was seen as a prerequisite for receiving $$$ from the MERCERS.” So it seems campaigns hired Cambridge Analytica not for its “weapons-grade communications techniques” but for the firm’s connections to billionaire Robert Mercer.

5. The Trump campaign phased out Cambridge Analytica data in favor of RNC data for the general election

Just as the Cruz campaign became disillusioned after working with Cambridge Analytica during the primary, so too did the Trump campaign during the general election, as Major Garrett reported for CBS News:

The crucial decision was made in late September or early October when Mr. Trump’s son-in-law Jared Kushner and Brad Parscale, Mr. Trump’s digital guru on the 2016 campaign, decided to utilize just the RNC data for the general election and used nothing from that point from Cambridge Analytica or any other data vendor. The Trump campaign had tested the RNC data, and it proved to be vastly more accurate than Cambridge Analytica’s, and when it was clear the RNC would be a willing partner, Mr. Trump’s campaign was able to rely solely on the RNC.

And of the little work Cambridge Analytica did complete for the Trump campaign, none involved “psychographics,” The New York Times reported:

Mr. Bannon at one point agreed to expand the company’s role, according to the aides, authorizing Cambridge to oversee a $5 million purchase of television ads. But after some of them appeared on cable channels in Washington, D.C. — hardly an election battleground — Cambridge’s involvement in television targeting ended.

Trump aides … said Cambridge had played a relatively modest role, providing personnel who worked alongside other analytics vendors on some early digital advertising and using conventional micro-targeting techniques. Later in the campaign, Cambridge also helped set up Mr. Trump’s polling operation and build turnout models used to guide the candidate’s spending and travel schedule. None of those efforts involved psychographics.

6. There is no evidence that Facebook data was used in the Brexit referendum

Last year, the UK’s data protection authority fined Facebook £500,000 — the maximum penalty allowed under the law — for violations related to the Cambridge Analytica data scandal. The fine was astonishing considering that the investigation into the Facebook-derived data licensed by Cambridge Analytica “found no evidence that UK citizens were among them,” i.e., among the affected users, according to the BBC. This detail demolishes the second central claim of The Great Hack, that data fraudulently acquired from Facebook users enabled Cambridge Analytica to manipulate the British people into voting for Brexit. On this basis, Facebook is currently appealing the fine.

7. The Great Hack wasn’t a “hack” at all

The title of the film is an odd choice given the facts of the case, as detailed in the background section of this article. A “hack” is generally understood as an unauthorized breach of a computer system or network by a malicious actor. People think of a genius black hat programmer who overcomes a company’s cybersecurity defenses to profit off stolen data. Alex Kogan, the Cambridge University researcher who acquired the Facebook data for Cambridge Analytica, was nothing of the sort. 

As Gus Hurwitz noted in an article last year, Kogan entered into a contract with Facebook and asked users for their permission to acquire their data by using the thisismydigitallife personality app. Arguably, if there was a breach of trust, it was when the app users chose to share their friends’ data, too. The editorial choice to call this a “hack” instead of “data collection” or “data scraping” is of a piece with the rest of the film; when given a choice between accuracy and sensationalism, the directors generally chose the latter.

Why does this narrative persist despite the facts of the case?

The takeaway from the documentary is that Cambridge Analytica hacked Facebook and subsequently undermined two democratic processes: the Brexit referendum and the 2016 US presidential election. The reason this narrative has stuck in the public consciousness is that it serves everyone’s self-interest (except, of course, Facebook’s).

It lets voters off the hook for what seem, to many, to be drastic mistakes (i.e., electing a reality TV star president and undoing the European project). If we were all manipulated into making the “wrong” decision, then the consequences can’t be our fault! 

This narrative also serves Cambridge Analytica, to a point. For a time, the political consultant liked being able to tell prospective clients that it was the mastermind behind two stunning political upsets. Lastly, journalists like the story because they compete with Facebook in the advertising market and view the tech giant as an existential threat.

There is no evidence for the film’s implicit assumption that, but for Cambridge Analytica’s use of Facebook data to target voters, Trump wouldn’t have been elected and the UK wouldn’t have voted to leave the EU. Despite its tone and ominous presentation style, The Great Hack fails to muster any support for its extreme claims. The truth is much more mundane: the Facebook-Cambridge Analytica data scandal was neither a “hack” nor was it “great” in historical importance.

The documentary ends with a question:

But the hardest part in all of this is that these wreckage sites and crippling divisions begin with the manipulation of one individual. Then another. And another. So, I can’t help but ask myself: Can I be manipulated? Can you?

No — but the directors of The Great Hack tried their best to do so.

Paul H. Rubin is the Dobbs Professor of Economics Emeritus, Emory University, and President, Southern Economic Association, 2013

I want to thank Geoff for inviting me to blog about my new book.

My book, The Capitalist Paradox: How Cooperation Enables Free Market Competition (Bombardier Books, 2019), has been published. The main question I address in this short book is: Given the obvious benefits of markets over socialism, why do so many still oppose markets? I have been concerned with this issue for many years. Given the current state of American politics, the question is even more important than when I began the book.

I begin by pointing out that humans are not good intuitive economists. Our minds evolved in a setting where the economy was simple, with little trade, little specialization (except by age and gender), and little capital. In that world there was no need for our brains to evolve to understand economics. (Politics is a different story.) The main legacy of this world is that our minds evolved to view the world as zero-sum. Zero-sum thinking underlies most policy errors in economics.

The second part of the argument is that in many cases, when economists are discussing efficiency issues (such as optimal taxation) listeners are hearing distribution issues. So we economists would do better to begin with a discussion showing that there are efficiency (“size of the pie”) effects before showing what they are in a particular case.  That is, we should show that taxation can affect total income before showing how it does so in a particular case. I call this “really basic economics,” which should be taught before basic economics. It is sometimes said that experts understand their field so well that they are “mind blind” to the basics, and that is the situation here.

I then show that competition is an improper metaphor for economics. Discussions of competition bring up sports (in economics the notion of competition was borrowed from sports), and sports are zero-sum. Thus, when economists discuss competition, they reinforce people’s notion that economics is zero-sum. People do not like competition. A quote from the book:

Here are some common modifiers of “competition” and the number of Google references to each:

“Cutthroat competition” (256,000), “excessive competition” (159,000), “destructive competition” (105,000), “ruthless competition” (102,000), “ferocious competition” (66,700), “vicious competition” (53,500), “unfettered competition” (37,000), “unrestrained competition” (34,500), “harmful competition” (18,000), and “dog-eat-dog competition” (15,000). Conversely, for “beneficial competition” there are 16,400 references. For “beneficial cooperation” there are 548,000 references, and almost no references to any of the negative modifiers of cooperation.

The final point, and what ties it all together, is a discussion showing that the economy is actually more cooperative than it is competitive. There are more cooperative relationships in an economy than there are competitive interactions.  The basic economic element is a transaction, and transactions are cooperative.  Competition chooses the best agents to cooperate with, but cooperation does the work and creates the consumer surplus. Thus, referring to markets as “cooperative” rather than “competitive” would not only reduce hostility towards markets, but would also be more accurate.

An economist reading this book would probably not learn much economics. I do not advocate any major change in economic theory from competition to cooperation. But I propose a different way to view the economy, and one that might help us better explain what we are doing to students and to policy makers, including voters.

Underpinning many policy disputes is a frequently rehearsed conflict of visions: Should we experiment with policies that are likely to lead to superior, but unknown, solutions, or should we stick to well-worn policies, regardless of how poorly they fit current circumstances?

This conflict is clearly visible in the debate over whether DOJ should continue to enforce its consent decrees with the major music performing rights organizations (“PROs”), ASCAP and BMI—or terminate them. 

As we note in our recently filed comments with the DOJ, summarized below, the world has moved on since the decrees were put in place in the first half of the twentieth century. Given the changed circumstances, the DOJ should terminate the consent decrees. This would allow entrepreneurs, armed with modern technology, to facilitate a true market for public performance rights.

The consent decrees

In the early days of radio, it was unclear how composers and publishers could effectively monitor and enforce their copyrights. Thousands of radio stations across the nation were playing the songs that tens of thousands of composers had written. Given the state of technology, there was no readily foreseeable way to enable bargaining between the stations and composers for license fees associated with these plays.

In 1914, a group of rights holders established the American Society of Composers, Authors and Publishers (ASCAP) as a way to overcome these transaction costs by negotiating with radio stations on behalf of all of its members.

Even though ASCAP’s business was clearly aimed at ensuring that rightsholders were appropriately compensated for the use of their works, which logically would have incentivized greater output of licensable works, the nonstandard arrangement it embodied was unacceptable to the antitrust enforcers of the era. Not long after it was created, the Department of Justice began investigating ASCAP for potential antitrust violations.

While the agglomeration of rights under a single entity had obvious benefits for licensors and licensees of musical works, a power struggle nevertheless emerged between ASCAP and radio broadcasters over the terms of those licenses. Eventually this struggle led to the formation of a new PRO, the broadcaster-backed BMI, in 1939. The following year, the DOJ challenged the activities of both PROs in dual criminal antitrust proceedings. The eventual result was a set of consent decrees in 1941 that, with relatively minor modifications over the years, still regulate the music industry.

Enter the Internet

The emergence of new ways to distribute music has, perhaps unsurprisingly, resulted in renewed interest from artists in developing alternative ways to license their material. In 2014, BMI and ASCAP asked the DOJ to modify their consent decrees to permit music publishers partially to withdraw from the PROs, which would have enabled those partially-withdrawing publishers to license their works to digital services under separate agreements (and prohibited the PROs from licensing their works to those same services). However, the DOJ rejected this request and insisted that the consent decree requires “full-work” licenses — a result that would have not only entrenched the status quo, but also erased the competitive differences that currently exist between the PROs. (It might also have created other problems, such as limiting collaborations between artists who currently license through different PROs.)

This episode demonstrates a critical flaw in how the consent decrees currently operate. Imposing full-work license obligations on PROs would have short-circuited the limited market that currently exists, to the detriment of creators, competition among PROs, and, ultimately, consumers. Paradoxically, these harms flow directly from a presumption that administrative officials, seeking to enforce antitrust law — the ultimate aim of which is to promote competition and consumer welfare — can dictate market terms through top-down regulatory intervention better than participants working together can.

If a PRO wants to offer full-work licenses to its licensee-customers, it should be free to do so (including, e.g., by contracting with other PROs in cases where the PRO in question does not own the work outright). These could be a great boon to licensees and the market. But such an innovation would flow from a feedback mechanism in the market, and would be subject to that same feedback mechanism. 

However, for the DOJ as a regulatory overseer to intervene in the market and assert a preference that it deemed superior (but that was clearly not the result of market demand, or subject to market discipline) is fraught with difficulty. And this is the emblematic problem with the consent decrees and the mandated licensing regimes. It allows regulators to imagine that they have both the knowledge and expertise to manage highly complicated markets. But, as Mark Lemley has observed, “[g]one are the days when there was any serious debate about the superiority of a market-based economy over any of its traditional alternatives, from feudalism to communism.” 

It is no knock against the DOJ that it patently does not have either the knowledge or expertise to manage these markets: no one does. That’s the entire point of having markets, which facilitate the transmission and effective utilization of vast amounts of disaggregated information, including subjective preferences, that cannot be known to anyone other than the individual who holds them. When regulators can allow this process to work, they should.

Letting the market move forward

Some advocates of the status quo have recommended that the consent orders remain in place, because 

Without robust competition in the music licensing market, consumers could face higher prices, less choice, and an increase in licensing costs that could render many vibrant public spaces silent. In the absence of a truly competitive market in which PROs compete to attract services and other licensees, the consent decrees must remain in place to prevent ASCAP and BMI from abusing their substantial market power.

This gets to the very heart of the problem with the conflict of visions that undergirds policy debates. Advocating for the status quo in this manner is based on a static view of “markets,” one that is, moreover, rooted in an early twentieth-century conception of the relevant industries. The DOJ froze the licensing market in time with the consent decrees — perhaps justifiably in 1941 given the state of technology and the very high transaction costs involved. But technology and business practices have evolved and are now much more capable of handling the complex, distributed set of transactions necessary to make the performance license market a reality.

Believing that the absence of the consent decrees would cause the performance licensing market to collapse into an anticompetitive wasteland reflects a failure of imagination and suggests a fundamental distrust in the power of the market to uncover novel solutions—against the overwhelming evidence to the contrary.

Yet, those of a dull and pessimistic mindset need not fear unduly the revocation of the consent decrees. For if evidence emerges that the market participants (including the PROs and whatever other entities emerge) are engaging in anticompetitive practices to the detriment of consumer welfare, the DOJ can sue those entities. The threat of such actions should be sufficient in itself to deter such anticompetitive practices, but if it is not, then the sword of antitrust, including potentially the imposition of consent decrees, can once again be wielded.

Meanwhile, those of us with an optimistic, imaginative mindset look forward to a time in the near future when entrepreneurs devise innovative and cost-effective solutions to the problem of highly-distributed music licensing. In some respects their job is made easier by the fact that an increasing proportion of music is streamed via a small number of large companies (Spotify, Pandora, Apple, Amazon, Tencent, YouTube, Tidal, etc.). But it is quite feasible that, in the absence of the consent decrees, new licensing systems will emerge that use modern database technologies, blockchain, and other distributed ledgers to enable much more effective usage-based licenses, applicable not only to these streaming services but to others as well.

We hope the DOJ has the foresight to allow such true competition to enter this market and the strength to believe enough in our institutions that it can permit some uncertainty while entrepreneurs experiment with superior methods of facilitating music licensing.

[This post is the seventh in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Alec Stapp, Research Fellow at the International Center for Law & Economics]

Should we break up Microsoft? 

In all the talk of breaking up “Big Tech,” no one seems to mention the biggest tech company of them all. Microsoft’s market cap is currently higher than those of Apple, Google, Amazon, and Facebook. If big is bad, then, at the moment, Microsoft is the worst.

Apart from size, antitrust activists also claim that the structure and behavior of the Big Four — Facebook, Google, Apple, and Amazon — is why they deserve to be broken up. But they never include Microsoft, which is curious given that most of their critiques also apply to the largest tech giant:

  1. Microsoft is big (current market cap exceeds $1 trillion)
  2. Microsoft is dominant in narrowly-defined markets (e.g., desktop operating systems)
  3. Microsoft is simultaneously operating and competing on a platform (i.e., the Microsoft Store)
  4. Microsoft is a conglomerate capable of leveraging dominance from one market into another (e.g., Windows, Office 365, Azure)
  5. Microsoft has its own “kill zone” for startups (196 acquisitions since 1994)
  6. Microsoft operates a search engine that preferences its own content over third-party content (i.e., Bing)
  7. Microsoft operates a platform that moderates user-generated content (i.e., LinkedIn)

To be clear, this is not to say that an antitrust case against Microsoft is as strong as the case against the others. Rather, it is to say that the cases against the Big Four on these dimensions are as weak as the case against Microsoft, as I will show below.

Big is bad

Tim Wu published a book last year arguing for more vigorous antitrust enforcement — including against Big Tech — called “The Curse of Bigness.” As you can tell by the title, he argues, in essence, for a return to the bygone era of “big is bad” presumptions. In his book, Wu mentions “Microsoft” 29 times, but only in the context of its 1990s antitrust case. On the other hand, Wu has explicitly called for antitrust investigations of Amazon, Facebook, and Google. It’s unclear why big should be considered bad when it comes to the latter group but not when it comes to Microsoft. Maybe bigness isn’t actually a curse, after all.

As the saying goes in antitrust, “Big is not bad; big behaving badly is bad.” This aphorism arose to counter erroneous reasoning during the era of structure-conduct-performance when big was presumed to mean bad. Thanks to an improved theoretical and empirical understanding of the nature of the competitive process, there is now a consensus that firms can grow large either via superior efficiency or by engaging in anticompetitive behavior. Size alone does not tell us how a firm grew big — so it is not a relevant metric.

Dominance in narrowly-defined markets

Critics of Google say it has a monopoly on search and critics of Facebook say it has a monopoly on social networking. Microsoft is similarly dominant in at least a few narrowly-defined markets, including desktop operating systems, where Windows has a 78% market share globally, according to StatCounter.

Microsoft is also dominant in the “professional networking platform” market after its acquisition of LinkedIn in 2016. And the legacy tech giant is still the clear leader in the “paid productivity software” market. (Microsoft’s Office 365 revenue is roughly 10x Google’s G Suite revenue).

The problem here is obvious. These are overly-narrow market definitions for conducting an antitrust analysis. Is it true that Facebook’s platforms are the only service that can connect you with your friends? Should we really restrict the productivity market to “paid”-only options (as the EU similarly did in its Android decision) when there are so many free options available? These questions are laughable. Proper market definition requires considering whether a hypothetical monopolist could profitably impose a small but significant and non-transitory increase in price (SSNIP). If not (which is likely the case in the narrow markets above), then we should employ a broader market definition in each case.
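For what it’s worth, the standard way to operationalize the SSNIP test is critical loss analysis. A hedged, worked example (the 45% margin below is an assumption, not a figure from any actual filing):

```latex
% Critical loss for a hypothetical X% price increase at price-cost margin m
\[
\text{Critical Loss} = \frac{X}{X + m}, \qquad
\text{e.g.} \quad \frac{0.05}{0.05 + 0.45} = 10\% \quad (X = 5\%,\; m = 45\%).
\]
```

If a 5 percent price increase on “paid productivity software” would push more than 10 percent of sales to free alternatives such as Google Docs or LibreOffice, the hypothetical monopolist’s increase is unprofitable and the candidate market is drawn too narrowly.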

Simultaneously operating and competing on a platform

Elizabeth Warren likes to say that if you own a platform, then you shouldn’t both be an umpire and have a team in the game. Let’s put aside the problems with that flawed analogy for now. What she means is that you shouldn’t both run the platform and sell products, services, or apps on that platform (because it’s inherently unfair to the other sellers). 

Warren’s solution to this “problem” would be to create a regulated class of businesses called “platform utilities” which are “companies with an annual global revenue of $25 billion or more and that offer to the public an online marketplace, an exchange, or a platform for connecting third parties.” Microsoft’s revenue last quarter was $32.5 billion, so it easily meets the first threshold. And Windows obviously qualifies as “a platform for connecting third parties.”

Just as in mobile operating systems, desktop operating systems are compatible with third-party applications. These third-party apps can be free (e.g., iTunes) or paid (e.g., Adobe Photoshop). Of course, Microsoft also makes apps for Windows (e.g., Word, PowerPoint, Excel, etc.). But the more you think about the technical details, the blurrier the line between the operating system and applications becomes. Is the browser an add-on to the OS or a part of it (as Microsoft Edge appears to be)? The most deeply-embedded applications in an OS are simply called “features.”

Even though Warren hasn’t explicitly said that her plan would cover Microsoft, it almost certainly would. Previously, she left Apple out of the Medium post announcing her policy, only to later tell a journalist that the iPhone maker would also be prohibited from producing its own apps. What Warren fails to acknowledge in announcing that she would break up Apple is that trying to police the line between a first-party platform and third-party applications would be a nightmare for companies and regulators alike, likely leading to less innovation and higher prices for consumers (as they attempt to rebuild their previous bundles).

Leveraging dominance from one market into another

The core critique in Lina Khan’s “Amazon’s Antitrust Paradox” is that the very structure of Amazon itself is what leads to its anticompetitive behavior. Khan argues (in spite of the data) that Amazon uses profits in some lines of business to subsidize predatory pricing in other lines of business. Furthermore, she claims that Amazon uses data from its Amazon Web Services unit to spy on competitors and snuff them out before they become a threat.

Of course, this is similar to the theory of harm in Microsoft’s 1990s antitrust case: that the desktop giant was leveraging its monopoly from the operating system market into the browser market. Why don’t we hear the same concern today about Microsoft? Like Amazon and Google, Microsoft could uncharitably be described as extending its tentacles into as many sectors of the economy as possible — and the Big Four compete with it in many of those same markets.

What these potential antitrust harms leave out are the clear consumer benefits from bundling and vertical integration. Microsoft’s relationships with customers in one market might make it the most efficient vendor in related — but separate — markets. It is unsurprising, for example, that Windows customers would also frequently be Office customers. Furthermore, the zero marginal cost nature of software makes it an ideal product for bundling, which redounds to the benefit of consumers.

The “kill zone” for startups

In a recent article for The New York Times, Tim Wu and Stuart A. Thompson criticize Facebook and Google for the number of acquisitions they have made. They point out that “Google has acquired at least 270 companies over nearly two decades” and “Facebook has acquired at least 92 companies since 2007”, arguing that allowing such a large number of acquisitions to occur is conclusive evidence of regulatory failure.

Microsoft has made 196 acquisitions since 1994, yet it receives no mention in the NYT article (or in most of the discussion around supposed “kill zones”). But the acquisitions by Microsoft, Facebook, or Google are, in general, not problematic. They provide a crucial channel for liquidity in the venture capital and startup communities (the other channel being IPOs). According to the latest data from Orrick and Crunchbase, between 2010 and 2018 there were 21,844 acquisitions of tech startups for a total deal value of $1.193 trillion.

By comparison, according to data compiled by Jay R. Ritter, a professor at the University of Florida, there were 331 tech IPOs for a total market capitalization of $649.6 billion over the same period. Making it harder for a startup to be acquired would not result in more venture capital investment (and therefore not in more IPOs), according to recent research by Gordon M. Phillips and Alexei Zhdanov. The researchers show that “the passage of a pro-takeover law in a country is associated with more subsequent VC deals in that country, while the enactment of a business combination antitakeover law in the U.S. has a negative effect on subsequent VC investment.”

As investor and serial entrepreneur Leonard Speiser said recently, “If the DOJ starts going after tech companies for making acquisitions, venture investors will be much less likely to invest in new startups, thereby reducing competition in a far more harmful way.” 

Search engine bias

Google is often accused of biasing its search results to favor its own products and services. The argument goes that if we broke them up, a thousand search engines would bloom and competition among them would lead to less-biased search results. While it is a very difficult — if not impossible — empirical question to determine what a “neutral” search engine would return, one attempt by Josh Wright found that “own-content bias is actually an infrequent phenomenon, and Google references its own content more favorably than other search engines far less frequently than does Bing.” 

The report goes on to note that “Google references own content in its first results position when no other engine does in just 6.7% of queries; Bing does so over twice as often (14.3%).” Arguably, users of a particular search engine might be more interested in seeing content from that company because they have a preexisting relationship. But regardless of how we interpret these results, it’s clear this is not a frequent phenomenon.

So why is Microsoft being left out of the antitrust debate now?

One potential reason why Google, Facebook, and Amazon have been singled out for criticism of practices that seem common in the tech industry (and are often pro-consumer) is the prevailing business model in the journalism industry. Google and Facebook are by far the largest competitors in the digital advertising market, and Amazon is expected to be the third-largest player by next year, according to eMarketer. As Ramsi Woodcock pointed out, news publications are also competing for advertising dollars, the type of conflict of interest that usually would warrant disclosure if, say, a journalist held stock in a company they were covering.

Or perhaps Microsoft has successfully avoided the same level of antitrust scrutiny as the Big Four because it is not primarily consumer-facing like Apple or Amazon, nor does it operate a platform with a significant amount of political speech via user-generated content (UGC) like Facebook or Google (YouTube). Yes, Microsoft moderates content on LinkedIn, but the public does not get outraged when deplatforming merely prevents someone from spamming their colleagues with requests “to add you to my professional network.”

Microsoft’s core business is in the enterprise market, which allows it to sidestep the current debates about the supposed censorship of conservatives or unfair platform competition. To be clear, consumer-facing companies or platforms with user-generated content do not uniquely merit antitrust scrutiny. On the contrary, the benefits to consumers from these platforms are manifest. If this theory about why Microsoft has escaped scrutiny is correct, it means the public discussion thus far about Big Tech and antitrust has been driven by perception, not substance.


[This post is the sixth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Thibault Schrepel, Faculty Associate at the Berkman Center at Harvard University and Assistant Professor in European Economic Law at Utrecht University School of Law.]

The pretense of ignorance

Over the last few years, I have published a series of antitrust conversations with Nobel laureates in economics. I have discussed big tech dominance with most of them, and although they have different perspectives, all of them agreed on one thing: they do not know what the effect of breaking up big tech would be. In fact, I have never spoken with any economist who was able to show me convincing empirical evidence that breaking up big tech would, on net, be good for consumers. The same goes for political scientists; I have never read any article that, taking everything into consideration, proves empirically that breaking up tech companies would be good for protecting democracies, if that is the objective. (Please note that I am not even discussing the fact that using antitrust law to do so would violate the rule of law; for more on that subject, click here.)

This reminds me of Friedrich Hayek’s Nobel memorial lecture, in which he discussed the “pretense of knowledge.” He argued that some issues will always remain too complex for humans (even helped by quantum computers and the most advanced AI; that’s right!). Breaking up big tech is one such issue; it is simply impossible to consider simultaneously the micro- and macroeconomic impacts of such an enormous undertaking, which would affect, literally, billions of people. Not to mention the political, sociological, and legal issues, all of which combined are beyond human understanding.

Ignorance + fear = fame

In the absence of clear-cut conclusions, here is why (I think) some officials are arguing for breaking up big tech. First, it may be that some of them actually believe it would be great. But I am sure we agree that beliefs should not be a valid basis for such actions. More realistically, the answer can be found in the work of another Nobel laureate, James Buchanan, and in particular his 1978 lecture in Vienna entitled “Politics Without Romance.”

In his lecture and the paper that emerged from it, Buchanan argued that while markets fail, so do governments. The latter is especially relevant insofar as top officials entrusted with public power may, occasionally at least, use that power to benefit their personal interests rather than the public interest. Thus, the presumption that government-imposed corrections for market failures always accomplish the desired objectives must be rejected. Taking that into consideration, it follows that the expected effectiveness of public action should always be established as precisely and scientifically as possible before taking action. Integrating these insights from Hayek and Buchanan, we must conclude that it is not possible to know whether the effects of breaking up big tech would on net be positive.

The question, then, is why some officials are arguing for breaking up tech giants in the absence of positive empirical evidence. Well, because defending such actions may help them achieve their personal goals. Often, it is more important for public officials to show their muscle and take action than to show great care about reaching a positive net result for society. This is especially true when it is practically impossible to evaluate the outcome due to the scale and complexity of the changes that ensue. That enables these officials to take credit for being bold while avoiding blame for the harms.

But for such a call to be profitable for the public officials, they first must legitimize the potential action in the eyes of the majority of the public. So far, most consumers evidently like the services of tech giants, which is why it is crucial for the top officials engaged in such a strategy to demonize those companies and explain to consumers why they are wrong to enjoy them. Only then does defending the breakup of tech giants become politically valuable.

Some data, one trend

In a recent paper entitled “Antitrust Without Romance,” I have analyzed the speeches of the five current FTC commissioners, as well as the speeches of the current and three previous EU Competition Commissioners. What I found is an increasing trend to demonize big tech companies. In other words, public officials increasingly seek to prepare the general public for the idea that breaking up tech giants would be great.

In Europe, current Competition Commissioner Margrethe Vestager has sought to establish an opposition between the people (referred to as “us”) and tech companies (referred to as “them”) in more than 80% of her speeches. She further describes these companies as manipulating the public and unleashing violence. She says they “distort or fabricate information, manipulate people’s views and degrade public debate” and help “harmful, untrue information spread faster than ever, unleashing violence and undermining democracy.” She even says they create a “danger of death.” On this basis, she mentions the possibility of breaking them up (for more data about her speeches, see this link).

In the US, we did not observe a similar trend. Assistant Attorney General Makan Delrahim, who has responsibility for antitrust enforcement at the Department of Justice, describes the relationship between people and companies as being in opposition in fewer than 10% of his speeches. The same goes for most of the FTC commissioners (to see all the data about their speeches, see this link). The exceptions are FTC Chairman Joseph J. Simons, who describes companies’ behavior as “bad” from time to time (and underlines that consumers “deserve” better), and Commissioner Rohit Chopra, who describes the relationship between companies and the people as being in opposition in 30% of his speeches and frequently labels companies as “bad.” These are minor signs of big tech demonization compared to what European officials are currently doing. Unfortunately, however, part of US legal scholarship (which does not hide its political objectives) also pushes for demonizing big tech companies. One may reasonably fear that this trend will grow in the US as it has in Europe, especially given the upcoming presidential campaign, in which far-right and far-left politicians seem to agree on the need to break up big tech.

And yet, let’s remember that no one has any documented, tangible, and reproducible evidence that breaking up tech giants would be good for consumers, or for societies at large, or, in fact, for anyone (even dolphins, okay). It might be a good idea; it might be a bad idea. Who knows? But the lack of evidence either way militates against taking such action. Meanwhile, there is strong evidence that these discussions are fueled by a handful of individuals wishing to benefit from such a call for action. They do so, first, by depicting tech giants as the new elite standing in opposition to the people, and then by portraying themselves as the only saviors capable of taking action.

Epilogue: who knows, life is not a Tarantino movie

For the last 30 years, antitrust law has been largely immune to strategic takeover by political interests. It may now be returning to a previous era in which it was the instrument of a few. This transformation is already happening in Europe (it is expected to hit case law there quite soon) and is getting real in the US, where groups with openly political goals seek to make antitrust law a Trojan horse for their personal interests. The only semblance of evidence they offer is a handful of allegedly harmful micro-practices (see Amazon’s Antitrust Paradox), which they use as the basis for urging macro, structural measures such as breaking up tech companies. This is disproportionate and, in the absence of better knowledge, purely opportunistic and potentially foolish. Who knows, at this point, whether antitrust law will emerge intact from this populist and moralist episode? And who knows what the next idea of those who want to use antitrust law for purely political purposes will be? Life is not a Tarantino movie; it may end up badly.

Advanced broadband networks, including 5G, fiber, and high-speed cable, are hot topics, but little attention is paid to the critical investments in infrastructure necessary to make these networks a reality. Each type of network has its own unique set of challenges to solve, both technically and legally. Advanced broadband delivered over cable systems, for example, not only has to incorporate support and upgrades for the physical infrastructure that facilitates modern high-definition television signals and high-speed Internet service, but also needs to be deployed within a regulatory environment that is fragmented across the many thousands of municipalities in the US. Oftentimes, managing that regulatory environment can be just as difficult as managing the actual provision of service.

The FCC has taken aim at one of these hurdles with its proposed Third Report and Order on the interpretation of Section 621 of the Cable Act, which is on the agenda for the Commission’s open meeting later this week. The most salient (for purposes of this post) feature of the Order is how the FCC intends to shore up the interpretation of the Cable Act’s limitation on cable franchise fees that municipalities are permitted to levy. 

The Act was passed and later amended in a way that carefully drew lines around the acceptable scope of local franchising authorities’ de facto monopoly power in granting cable franchises. The thrust of the Act was to encourage competition and build-out by discouraging franchising authorities from viewing cable providers as a captive source of unlimited revenue. It did this while also giving franchising authorities the tools necessary to support public, educational, and governmental programming and enabling them to be fairly compensated for use of the public rights of way. Unfortunately, since the 1984 Cable Act was passed, an increasing number of local and state franchising authorities (“LFAs”) have attempted to work around the Act’s careful balance. In particular, these efforts have created two main problems.

First, LFAs frequently attempt to evade the Act’s limitation on franchise fees to five percent of cable revenues by seeking a variety of in-kind contributions from cable operators that impose costs over and above the statutorily permitted five percent limit. LFAs do this despite the plain language of the statute defining franchise fees quite broadly as including any “tax, fee, or assessment of any kind imposed by a franchising authority or any other governmental entity.”

Although not nominally “fees,” such requirements are indisputably “assessments,” and the costs of such obligations are equivalent to the marginal cost a cable operator incurs in providing those “free” services and facilities, plus the opportunity cost (i.e., the revenue the operator forgoes by not putting its fixed assets to the uses it would have chosen absent the state or local franchise obligation). Any such costs will, to some extent, be passed on to customers as higher subscription prices, reduced quality, or both. By carefully limiting the ability of LFAs to abuse their bargaining position, Congress ensured that they could not extract disproportionate rents from cable operators (and, ultimately, their subscribers).

Second, LFAs also attempt to circumvent the franchise fee cap of five percent of gross cable revenues by seeking additional fees for non-cable services provided over mixed-use networks (i.e., imposing additional franchise fees on the provision of broadband and other non-cable services over cable networks). But the statute is similarly clear that LFAs and other governmental entities cannot regulate non-cable services provided via franchised cable systems.
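
To see how the cap arithmetic works, here is a minimal sketch in Python. Every figure is a hypothetical illustration (the function names and dollar amounts are mine); the only inputs drawn from the statute are the five percent ceiling and the cable-only revenue base.

```python
# Minimal sketch of the five percent franchise fee cap. All figures are
# hypothetical illustrations, not data from any actual franchise.

def franchise_fee_cap(gross_cable_revenue: float) -> float:
    """Statutory ceiling: five percent of gross revenues from cable service.
    Revenue from non-cable services (e.g., broadband) is excluded from the base."""
    return 0.05 * gross_cable_revenue

def total_assessments(cash_fees: float, in_kind_value: float) -> float:
    """Cash franchise fees plus the economic value of in-kind obligations
    (free service, facilities, channel capacity, and the like)."""
    return cash_fees + in_kind_value

if __name__ == "__main__":
    cable_revenue = 10_000_000       # hypothetical gross cable revenue
    broadband_revenue = 8_000_000    # hypothetical non-cable revenue (not in the base)

    cap = franchise_fee_cap(cable_revenue)                          # $500,000
    improper_base_cap = 0.05 * (cable_revenue + broadband_revenue)  # what some LFAs effectively seek
    demanded = total_assessments(cash_fees=450_000, in_kind_value=150_000)

    print(f"Lawful cap:        ${cap:,.0f}")
    print(f"Demanded in total: ${demanded:,.0f} (exceeds cap: {demanded > cap})")
    print(f"Cap if broadband revenue were improperly included: ${improper_base_cap:,.0f}")
```

The point of the sketch is simply that once in-kind obligations are valued and counted, demands that look nominally compliant can exceed the cap, and sweeping non-cable revenue into the base inflates the ceiling well beyond what the statute permits.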

My colleagues and I at ICLE recently filed an ex parte letter on these issues that analyzes the law and economics of both the underlying statute and the FCC’s proposed rulemaking that would affect the interpretation of cable franchise fees. For a variety of reasons set forth in the letter, we believe that the Commission is on firm legal and economic footing to adopt its proposed Order.  

It should be unavailing – and legally irrelevant – to argue, as many LFAs have, that declining cable franchise revenue leaves municipalities with an insufficient source of funds to finance their activities, and thus that recourse to these other sources is required. Congress intentionally enacted the five percent revenue cap to prevent LFAs from relying on cable franchise fees as an unlimited general revenue source. In order to maintain the proper incentives for network buildout — which are ever more critical as our economy increasingly relies on high-speed broadband networks — the Commission should adopt the proposed Order.