
In a recent article for the San Francisco Daily Journal I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of jurisdiction of Canadian courts to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The Internet isn’t that exceptional.

Read the whole thing!

The Federal Trade Commission’s (FTC) regrettable January 17 filing of a federal court injunctive action against Qualcomm, in the waning days of the Obama Administration, is a blow to its institutional integrity and well-earned reputation as a top-notch competition agency.

Stripping away the semantic gloss, the heart of the FTC’s complaint is that Qualcomm is charging smartphone makers “too much” for licenses needed to practice standardized cellular communications technologies – technologies that Qualcomm developed. This complaint flies in the face of the Supreme Court’s teaching in Verizon v. Trinko that a monopolist has every right to charge monopoly prices and thereby enjoy the full fruits of its legitimately obtained monopoly. But the Qualcomm complaint is more than one exceptionally ill-advised example of prosecutorial overreach that (hopefully) will fail and end up on the scrapheap of unsound federal antitrust initiatives. It undoubtedly will be cited by aggressive foreign competition authorities as showing that American antitrust enforcement now recognizes mere “excessive pricing” as a form of “monopoly abuse” – thereby justifying the “excessive pricing” cases that are growing like Topsy abroad, especially in East Asia.

Particularly unfortunate is the fact that the Commission chose to authorize the filing by a 2-1 vote, over Commissioner Maureen Ohlhausen’s pithy dissent – itself a rarity in cases involving the filing of federal lawsuits. Commissioner Ohlhausen’s analysis skewers the legal and economic basis for the FTC’s complaint, and her summary, which includes an outstanding statement of basic antitrust enforcement principles, is well worth noting (footnote omitted):

My practice is not to write dissenting statements when the Commission, against my vote, authorizes litigation. That policy reflects several principles. It preserves the integrity of the agency’s mission, recognizes that reasonable minds can differ, and supports the FTC’s staff, who litigate demanding cases for consumers’ benefit. On the rare occasion when I do write, it has been to avoid implying that I disagree with the complaint’s theory of liability.

I do not depart from that policy lightly. Yet, in the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections.

Let us hope that President Trump makes it an early and high priority to name Commissioner Ohlhausen Acting Chairman of the FTC. The FTC simply cannot afford any more embarrassing and ill-reasoned antitrust initiatives that undermine basic principles of American antitrust enforcement and may be used by foreign competition authorities to justify unwarranted actions against American firms. Maureen Ohlhausen can be counted upon to provide needed leadership in moving the Commission in a sounder direction.

P.S. I have previously published a commentary at this site regarding an unwarranted competition law Statement of Objections directed at Google by the European Commission, a matter which did not involve patent licensing. And for a more general critique of European competition policy along these lines, see here.

Yesterday the Chairman and Ranking Member of the House Judiciary Committee issued the first set of policy proposals following their long-running copyright review process. These proposals were principally aimed at ensuring that the IT demands of the Copyright Office were properly met so that it could perform its assigned functions, and to provide adequate authority for it to adapt its policies and practices to the evolving needs of the digital age.

In response to these modest proposals, Public Knowledge issued a telling statement, calling for enhanced scrutiny of these proposals related to an agency “with a documented history of regulatory capture.”

The entirety of this “documented history,” however, is a paper published by Public Knowledge itself alleging regulatory capture—as evidenced by the fact that 13 people had either gone from the Copyright Office to copyright industries or vice versa over the past 20+ years. The original document was brilliantly skewered by David Newhoff in a post on the indispensable blog, Illusion of More:

To support its premise, Public Knowledge, with McCarthy-like righteousness, presents a list—a table of thirteen former or current employees of the Copyright Office who either have worked for private-sector, rights-holding organizations prior to working at the Office or who are now working for these private entities after their terms at the Office. That thirteen copyright attorneys over a 22-year period might be employed in some capacity for copyright owners is a rather unremarkable observation, but PK seems to think it’s a smoking gun…. Or, as one of the named thirteen, Steven Tepp, observes in his response, PK also didn’t bother to list the many other Copyright Office employees who, “went to Internet and tech companies, the Smithsonian, the FCC, and other places that no one would mistake for copyright industries.” One might almost get the idea that experienced copyright attorneys pursue various career paths or something.

Not content to rest on the laurels of its groundbreaking report of Original Sin, Public Knowledge has now doubled down on its audacity, using its own previous advocacy as the sole basis to essentially impugn an entire agency, without more. But, as advocacy goes, that’s pretty specious. Some will argue that there is an element of disingenuousness in all advocacy, even if it is as benign as failing to identify the weaknesses of one’s arguments—and perhaps that’s true. (We all cite our own work at one time or another, don’t we?) But that’s not the situation we have before us. Instead, Public Knowledge creates its own echo chamber, effectively citing only its own idiosyncratic policy preferences as the “documented” basis for new constraints on the Copyright Office. Even in a world of moral relativism, bubbles of information, and competing narratives about the truth, this should be recognizable as thin gruel.

So why would Public Knowledge expose itself in this manner? What is to be gained by seeking to impugn the integrity of the Copyright Office? There the answer is relatively transparent: PK hopes to capitalize on the opportunity to itself capture Copyright Office policy-making by limiting the discretion of the Copyright Office, and by turning it into an “objective referee” rather than the nation’s steward for ensuring the proper functioning of the copyright system.

PK claims that the Copyright Office should not be involved in making copyright policy, other than perhaps technically transcribing the agreements reached by other parties. Thus, in its “indictment” of the Copyright Office (which it now risibly refers to as the Copyright Office’s “documented history of capture”), PK wrote:

These statements reflect the many specific examples, detailed in Section II, in which the Copyright Office has acted more as an advocate for rightsholder interests than an objective referee of copyright debates.

Essentially, PK seems to believe that copyright policy should be the province of self-proclaimed “consumer advocates” like PK itself—and under no circumstances the employees of the Copyright Office who might actually deign to promote the interests of the creative community. After all, it is staffed by a veritable cornucopia of copyright industry shills: According to PK’s report, fully 1 of its 400 employees has either left the Office to work in the copyright industry or joined the Office from industry every 1.5 years, on average! For reference (not that PK thinks to mention it), some 325 Google employees have worked in government offices in just the past 15 years. And Google is hardly alone in this. Good people get good jobs, whether in government, industry, or both. It’s hardly revelatory.

And never mind that the stated mission of the Copyright Office “is to promote creativity by administering and sustaining an effective national copyright system,” and that “the purpose of the copyright system has always been to promote creativity in society.” And never mind that Congress imbued the Office with the authority to make regulations (subject to approval by the Librarian of Congress) and directed the Copyright Office to engage in a number of policy-related functions, including:

  1. Advising Congress on national and international issues relating to copyright;
  2. Providing information and assistance to Federal departments and agencies and the Judiciary on national and international issues relating to copyright;
  3. Participating in meetings of international intergovernmental organizations and meetings with foreign government officials relating to copyright; and
  4. Conducting studies and programs regarding copyright.

No, according to Public Knowledge the Copyright Office is to do none of these things, unless it does so as an “objective referee of copyright debates.” But nowhere in the legislation creating the Office or amending its functions—nor anywhere else—is that limitation to be found; it’s just created out of whole cloth by PK.

The Copyright Office’s mission is not that of a content neutral referee. Rather, the Copyright Office is charged with promoting effective copyright protection. PK is welcome to solicit Congress to change the Copyright Act and the Office’s mandate. But impugning the agency for doing what it’s supposed to do is a deceptive way of going about it. PK effectively indicts and then convicts the Copyright Office for following its mission appropriately, suggesting that doing so could only have been the result of undue influence from copyright owners. But that’s manifestly false, given its purpose.

And make no mistake why: For its narrative to work, PK needs to define the Copyright Office as a neutral party, and show that its neutrality has been unduly compromised. Only then can Public Knowledge justify overhauling the office in its own image, under the guise of magnanimously returning it to its “proper,” neutral role.

Public Knowledge’s implication that it is a better defender of the “public” interest than those who actually serve in the public sector is a subterfuge, masking its real objective of transforming the nature of copyright law in its own benighted image. A questionable means to a noble end, PK might argue. Not in our book. This story always turns out badly.

Last week, the Internet Association (“IA”) — a trade group representing some of America’s most dynamic and fastest-growing tech companies, including the likes of Google, Facebook, Amazon, and eBay — presented the incoming Trump Administration with a ten-page policy paper entitled “Policy Roadmap for New Administration, Congress.”

The document’s content is not surprising, given its source: It is, in essence, a summary of the trade association’s members’ preferred policy positions, none of which is new or newly relevant. Which is fine, in principle; lobbying on behalf of members is what trade associations do — although we should be somewhat skeptical of a policy document that purports to represent the broader social welfare while it advocates for members’ preferred policies.

Indeed, despite being labeled a “roadmap,” the paper is backward-looking in certain key respects — a fact that leads to some strange syntax: “[the document is a] roadmap of key policy areas that have allowed the internet to grow, thrive, and ensure its continued success and ability to create jobs throughout our economy” (emphasis added). Since when is a “roadmap” needed to identify past policies? Indeed, as Bloomberg News reporter Joshua Brustein wrote:

The document released Monday is notable in that the same list of priorities could have been sent to a President-elect Hillary Clinton, or written two years ago.

As a wishlist of industry preferences, this would also be fine, in principle. But as an ostensibly forward-looking document, aimed at guiding policy transition, the IA paper is disappointingly un-self-aware. Rather than delineating an agenda aimed at improving policies to promote productivity, economic development and social cohesion throughout the economy, the document is overly focused on preserving certain regulations adopted at the dawn of the Internet age (when the internet was capitalized). Even more disappointing given the IA member companies’ central role in our contemporary lives, the document evinces no consideration of how Internet platforms themselves should strive to balance rights and responsibilities in new ways that promote meaningful internet freedom.

In short, the IA’s Roadmap constitutes a policy framework dutifully constructed to enable its members to maintain the status quo. While that might also serve to further some broader social aims, it’s difficult to see in the approach anything other than a defense of what got us here — not where we go from here.

To take one important example, the document reiterates the IA’s longstanding advocacy for the preservation of the online-intermediary safe harbors of the 20-year-old Digital Millennium Copyright Act (“DMCA”) — which were adopted during the era of dial-up, and before any of the principal members of the Internet Association even existed. At the same time, however, it proposes to reform one piece of legislation — the Electronic Communications Privacy Act (“ECPA”) — precisely because, at 30 years old, it has long since become hopelessly out of date. But surely if outdatedness is a justification for asserting the inappropriateness of existing privacy/surveillance legislation — as seems proper, given the massive technological and social changes surrounding privacy — the same concern should apply to copyright legislation with equal force, given the arguably even-more-substantial upheavals in the economic and social role of creative content in society today.

Of course there “is more certainty in reselling the past, than inventing the future,” but a truly valuable roadmap for the future from some of the most powerful and visionary companies in America should begin to tackle some of the most complicated and nuanced questions facing our country. It would be nice to see a Roadmap premised upon a well-articulated theory of accountability across all of the Internet ecosystem in ways that protect property, integrity, choice and other essential aspects of modern civil society.

Each of IA’s companies was principally founded on a vision of improving some aspect of the human condition; in many respects they have succeeded. But as society changes, even past successes may later become inconsistent with evolving social mores and economic conditions, necessitating thoughtful introspection and, often, policy revision. The IA can do better than pick and choose from among existing policies based on unilateral advantage and a convenient repudiation of responsibility.

Truth on the Market is delighted to welcome our newest blogger, Neil Turkewitz. Neil is the newly minted Senior Policy Counsel at the International Center for Law & Economics (so we welcome him to ICLE, as well!).

Prior to joining ICLE, Neil spent 30 years at the Recording Industry Association of America (RIAA), most recently as Executive Vice President, International.

Neil has spent most of his career working to expand economic opportunities for the music industry through modernization of copyright legislation and effective enforcement in global markets. He has worked closely with creative communities around the globe, with the US and foreign governments, and with international organizations (including WIPO and the WTO), to promote legal and enforcement reforms to respond to evolving technology, and to promote a balanced approach to digital trade and Internet governance premised upon the importance of regulatory coherence, elimination of inefficient barriers to global communications, and respect for Internet freedom and the rule of law.

Among other things, Neil was instrumental in the negotiation of the WTO TRIPS Agreement, worked closely with the US and foreign governments in the negotiation of free trade agreements, helped to develop the OECD’s Communique on Principles for Internet Policy Making, coordinated a global effort culminating in the production of the WIPO Internet Treaties, served as a formal advisor to the Secretary of Commerce and the USTR as Vice-Chairman of the Industry Trade Advisory Committee on Intellectual Property Rights, and served as a member of the Board of the Chamber of Commerce’s Global Intellectual Property Center.

You can read some of his thoughts on Internet governance, IP, and international trade here and here.

Welcome Neil!

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Imagine if you will… that a federal regulatory agency were to decide that the iPhone ecosystem was too constraining and too expensive; that consumers — who had otherwise voted for iPhones with their dollars — were being harmed by the fact that the platform was not “open” enough.

Such an agency might resolve (on the basis of a very generous reading of a statute), to force Apple to make its iOS software available to any hardware platform that wished to have it, in the process making all of the apps and user data accessible to the consumer via these new third parties, on terms set by the agency… for free.

Difficult as it may be to picture this ever happening, it is exactly the sort of Twilight Zone scenario that FCC Chairman Tom Wheeler is currently proposing with his new set-top box proposal.

Based on the limited information we have so far (a fact sheet and an op-ed), Chairman Wheeler’s new proposal does claw back some of the worst excesses of his initial draft (which we critiqued in our comments and reply comments to that proposal).

But it also appears to reinforce others — most notably the plan's disregard for the right of content creators to control the distribution of their content. Wheeler continues to dismiss the complex business models, relationships, and licensing terms that have evolved over years of competition and innovation. Instead, he offers a one-size-fits-all "solution" to a "problem" that market participants are already falling over themselves to solve.

Plus ça change…

To begin with, Chairman Wheeler’s new proposal is based on the same faulty premise: that consumers pay too much for set-top boxes, and that the FCC is somehow both prescient enough and Congressionally ordained to “fix” this problem. As we wrote in our initial comments, however,

[a]lthough the Commission asserts that set-top boxes are too expensive, the history of overall MVPD prices tells a remarkably different story. Since 1994, per-channel cable prices including set-top box fees have fallen by 2 percent, while overall consumer prices have increased by 54 percent. After adjusting for inflation, this represents an impressive overall price decrease.

And the fact is that no one buys set-top boxes in isolation; rather, the price consumers pay for cable service includes the ability to access that service. Whether the set-top box fee is broken out on subscribers’ bills or not, the total price consumers pay is unlikely to change as a result of the Commission’s intervention.
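The inflation adjustment described in the comments quoted above can be sketched in a few lines. This is an illustrative calculation only: it takes the quoted figures (a 2 percent nominal decline in per-channel cable prices since 1994, against a 54 percent rise in overall consumer prices) and converts the nominal change into a real, inflation-adjusted change. The helper function name is ours, not from the comments.

```python
def real_price_change(nominal_change: float, inflation: float) -> float:
    """Return the real (inflation-adjusted) price change.

    Both arguments are decimal fractions, e.g. -0.02 for a 2% decline.
    The real change is the nominal price relative deflated by the
    overall price level: (1 + nominal) / (1 + inflation) - 1.
    """
    return (1 + nominal_change) / (1 + inflation) - 1


# Figures from the quoted comments: -2% nominal, +54% CPI since 1994.
change = real_price_change(nominal_change=-0.02, inflation=0.54)
print(f"Real change in per-channel cable prices: {change:.1%}")
# → Real change in per-channel cable prices: -36.4%
```

In other words, once the 54 percent rise in the general price level is accounted for, a 2 percent nominal decline amounts to a real price decrease of roughly 36 percent.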

As we have previously noted, the MVPD set-top box market is an aftermarket; no one buys set-top boxes without first (or simultaneously) buying MVPD service. And as economist Ben Klein (among others) has shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive:

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

Engineering the set-top box aftermarket to bring more direct competition to bear may redistribute profits, but it’s unlikely to change what consumers pay.

Stripped of its questionable claims regarding consumer prices and placed in the proper context — in which consumers enjoy more ways to access more video content than ever before — Wheeler’s initial proposal ultimately rested on its promise to “pave the way for a competitive marketplace for alternate navigation devices, and… end the need for multiple remote controls.” Weak sauce, indeed.

He now adds a new promise: that “integrated search” will be seamlessly available for consumers across the new platforms. But just as universal remotes and channel-specific apps on platforms like Apple TV have already made his “multiple remotes” promise a hollow one, so, too, have competitive pressures already begun to deliver integrated search.

Meanwhile, such marginal benefits come with a host of substantial costs, as others have pointed out. Do we really need the FCC to grant itself more powers and create a substantial and coercive new regulatory regime to mandate what the market is already poised to provide?

From ignoring copyright to obliterating copyright

Chairman Wheeler’s first proposal engendered fervent criticism for the impossible position in which it placed MVPDs — of having to disregard, even outright violate, their contractual obligations to content creators.

Commendably, the new proposal acknowledges that contractual relationships between MVPDs and content providers should remain “intact.” Thus, the proposal purports to enable programmers and MVPDs to maintain “their channel position, advertising and contracts… in place.” MVPDs will retain “end-to-end” control of the display of content through their apps, and all contractually guaranteed content protection mechanisms will remain, because the “pay-TV’s software will manage the full suite of linear and on-demand programming licensed by the pay-TV provider.”

But, improved as it is, the new proposal continues to operate in an imagined world where the incredibly intricate and complex process by which content is created and distributed can be reduced to the simplest of terms, dictated by a regulator and applied uniformly across all content and all providers.

According to the fact sheet, the new proposal would “[p]rotect[] copyrights and… [h]onor[] the sanctity of contracts” through a “standard license”:

The proposed final rules require the development of a standard license governing the process for placing an app on a device or platform. A standard license will give device manufacturers the certainty required to bring innovative products to market… The license will not affect the underlying contracts between programmers and pay-TV providers. The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.

But programming is distributed under a diverse range of contract terms. The only way a single, “standard license” could possibly honor these contracts is by forcing content providers to license all of their content under identical terms.

Leaving aside for a moment the fact that the FCC has no authority whatever to do this, for such a scheme to work, the agency would necessarily have to strip content holders of their right to govern the terms on which their content is accessed. After all, if MVPDs are legally bound to redistribute content on fixed terms, they have no room to permit content creators to freely exercise their rights to specify terms like windowing, online distribution restrictions, geographic restrictions, and the like.

In other words, the proposal simply cannot deliver on its promise that “[t]he license will not affect the underlying contracts between programmers and pay-TV providers.”

But fear not: According to the Fact Sheet, "[p]rogrammers will have a seat at the table to ensure that content remains protected." Such largesse! One would be forgiven for assuming that the programmers' (single?) seat will be surrounded by those of other participants — regulatory advocates, technology companies, and others — whose sole objective will be to minimize content companies' ability to restrict the terms on which their content is accessed.

And we cannot ignore the ominous final portion of the Fact Sheet’s “Standard License” description: “The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.” Such an arrogation of ultimate authority by the FCC doesn’t bode well for that programmer’s “seat at the table” amounting to much.

Unfortunately, we can only imagine the contours of the final proposal that will describe the many ways by which distribution licenses can “harm the marketplace for competitive devices.” But an educated guess would venture that there will be precious little room for content creators and MVPDs to replicate a large swath of the contract terms they currently employ. “Any content owner can have its content painted any color that it wants, so long as it is black.”

At least we can take solace in the fact that the FCC has no authority to do what Wheeler wants it to do

And, of course, this all presumes that the FCC will be able to plausibly muster the legal authority in the Communications Act to create what amounts to a de facto compulsory licensing scheme.

A single license imposed upon all MVPDs, along with the necessary restrictions this will place upon content creators, does just as much as an overt compulsory license to undermine content owners’ statutory property rights. For every license agreement that would be different than the standard agreement, the proposed standard license would amount to a compulsory imposition of terms that the rights holders and MVPDs would not otherwise have agreed to. And if this sounds tedious and confusing, just wait until the Commission starts designing its multistakeholder Standard Licensing Oversight Process (“SLOP”)….

Unfortunately for Chairman Wheeler (but fortunately for the rest of us), the FCC has neither the legal authority, nor the requisite expertise, to enact such a regime.

Last month, the Copyright Office was clear on this score in its letter to Congress commenting on the Chairman’s original proposal:  

[I]t is important to remember that only Congress, through the exercise of its power under the Copyright Clause, and not the FCC or any other agency, has the constitutional authority to create exceptions and limitations in copyright law. While Congress has enacted compulsory licensing schemes, they have done so in response to demonstrated market failures, and in a carefully circumscribed manner.

Assuming that Section 629 of the Communications Act — the provision that otherwise empowers the Commission to promote a competitive set-top box market — fails to empower the FCC to rewrite copyright law (which is assuredly the case), the Commission will be on shaky ground for the inevitable torrent of lawsuits that will follow the revised proposal.

In fact, this new proposal feels more like an emergency pivot by a panicked Chairman than an actual, well-grounded legal recommendation. While the new proposal improves upon the original, it retains at its core the same ill-informed, ill-advised and illegal assertion of authority that plagued its predecessor.

Last week the International Center for Law & Economics and I filed an amicus brief in the DC Circuit in support of en banc review of the court’s decision to uphold the FCC’s 2015 Open Internet Order.

In our previous amicus brief before the panel that initially reviewed the OIO, we argued, among other things, that

In order to justify its Order, the Commission makes questionable use of important facts. For instance, the Order’s ban on paid prioritization ignores and mischaracterizes relevant record evidence and relies on irrelevant evidence. The Order also omits any substantial consideration of costs. The apparent necessity of the Commission’s aggressive treatment of the Order’s factual basis demonstrates the lengths to which the Commission must go in its attempt to fit the Order within its statutory authority.

Our brief supporting en banc review builds on these points to argue that

By reflexively affording substantial deference to the FCC in affirming the Open Internet Order (“OIO”), the panel majority’s opinion is in tension with recent Supreme Court precedent….

The panel majority need not have, and arguably should not have, afforded the FCC the level of deference that it did. The Supreme Court’s decisions in State Farm, Fox, and Encino all require a more thorough vetting of the reasons underlying an agency change in policy than is otherwise required under the familiar Chevron framework. Similarly, Brown and Williamson, Utility Air Regulatory Group, and King all indicate circumstances in which an agency construction of an otherwise ambiguous statute is not due deference, including when the agency interpretation is a departure from longstanding agency understandings of a statute or when the agency is not acting in an expert capacity (e.g., its decision is based on changing policy preferences, not changing factual or technical considerations).

In effect, the panel majority based its decision whether to afford the FCC deference upon deference to the agency’s poorly supported assertions that it was due deference. We argue that this is wholly inappropriate in light of recent Supreme Court cases.

Moreover,

The panel majority failed to appreciate the importance of granting Chevron deference to the FCC. That importance is most clearly seen at an aggregate level. In a large-scale study of every Court of Appeals decision between 2003 and 2013, Professors Kent Barnett and Christopher Walker found that a court’s decision to defer to agency action is uniquely determinative in cases where, as here, an agency is changing established policy.

Kent Barnett & Christopher J. Walker, Chevron In the Circuit Courts 61, Figure 14 (2016), available at ssrn.com/abstract=2808848.

Figure 14 from Barnett & Walker, as reproduced in our brief.

As that study demonstrates,

agency decisions to change established policy tend to present serious, systematic defects — and [thus that] it is incumbent upon this court to review the panel majority’s decision to reflexively grant Chevron deference. Further, the data underscore the importance of the Supreme Court’s command in Fox and Encino that agencies show good reason for a change in policy; its recognition in Brown & Williamson and UARG that departures from existing policy may fall outside of the Chevron regime; and its command in King that policies not made by agencies acting in their capacity as technical experts may fall outside of the Chevron regime. In such cases, the Court essentially holds that reflexive application of Chevron deference may not be appropriate because these circumstances may tend toward agency action that is arbitrary, capricious, in excess of statutory authority, or otherwise not in accordance with law.

As we conclude:

The present case is a clear example where greater scrutiny of an agency’s decision-making process is both warranted and necessary. The panel majority all too readily afforded the FCC great deference, despite the clear and unaddressed evidence of serious flaws in the agency’s decision-making process. As we argued in our brief before the panel, and as Judge Williams recognized in his partial dissent, the OIO was based on factually inaccurate, contradicted, and irrelevant record evidence.

Read our full — and very short — amicus brief here.

Since the European Commission (EC) announced its first inquiry into Google's business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all investigated — and ultimately rejected — similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Facebook's Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that "general search" is just one technology among many for serving information and ads to consumers online. Defining the relevant market — or limiting the definition of competition — in terms of the particular mechanism that Google happens to use to match consumers and advertisers ignores the substitutability of other mechanisms that do the same thing, merely because those mechanisms aren't called "search."

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. "Closed" platforms like the iTunes store and innumerable apps handle copious search traffic but also don't figure in the EC's market calculations. And so-called "dark social" interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers' informational (and merchants' advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google earns its revenue primarily from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company's very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google's competitors both require, and may be entitled to, unfettered access to Google's property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.com.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals via their browser by simply typing, for example, “Yelp.com” in their address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators' imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And rather than trying to hamstring Google, its competitors (and complainants) must innovate as well if they are to survive.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with "instant camera," let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn't compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.