
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Christine S. Wilson (Commissioner of the U.S. Federal Trade Commission).[1] The views expressed here are the author’s and do not necessarily reflect those of the Federal Trade Commission or any other Commissioner.]  

I type these words while subject to a stay-at-home order issued by West Virginia Governor James C. Justice II. “To preserve public health and safety, and to ensure the healthcare system in West Virginia is capable of serving all citizens in need,” I am permitted to leave my home only for a limited and precisely enumerated set of reasons. Billions of citizens around the globe are now operating under similar shelter-in-place directives as governments grapple with how to stem the tide of infection, illness and death inflicted by the global Covid-19 pandemic. Indeed, the first response of many governments has been to impose severe limitations on physical movement to contain the spread of the novel coronavirus. The second response contemplated by many, and the one on which this blog post focuses, involves the extensive collection and analysis of data in connection with people’s movements and health. Some governments are using that data to conduct sophisticated contact tracing, while others are using the power of the state to enforce orders for quarantines and against gatherings.

The desire to use modern technology on a broad scale for the sake of public safety is not unique to this moment. Technology is intended to improve the quality of our lives, in part by enabling us to help ourselves and one another. For example, cell towers broadcast wireless emergency alerts to all mobile devices in the area to warn us of extreme weather and other threats to safety in our vicinity. One well-known type of broadcast is the Amber Alert, which enables community members to assist in recovering an abducted child by providing descriptions of the abductor, the abductee and the abductor’s vehicle. Citizens who spot individuals and vehicles that meet these descriptions can then provide leads to law enforcement authorities. A private nonprofit organization, the National Center for Missing and Exploited Children, coordinates with state and local public safety officials to send out Amber Alerts through privately owned wireless carriers.

The robust civil society and free market in the U.S. make partnerships between the private sector and government agencies commonplace. But some of these arrangements involve a much more extensive sharing of Americans’ personal information with law enforcement than the emergency alert system does.

For example, Amazon’s home security product Ring advertises itself not only as a way to see when a package has been left at your door, but also as a way to make communities safer by turning over video footage to local police departments. In 2018, through a pilot program in Newark, New Jersey, the company donated more than 500 devices for homeowners in two neighborhoods to install at their homes, with a big caveat: Ring recipients were encouraged to share video with police. According to Ring, home burglaries in those neighborhoods fell by more than 50% from April through July 2018 relative to the same period a year earlier.

Yet members of Congress and privacy experts have raised concerns about these partnerships, which now number in the hundreds. After receiving Amazon’s response to his inquiry, Senator Edward Markey highlighted Ring’s failure to prevent police from sharing video footage with third parties and from keeping the video permanently, and Ring’s lack of precautions to ensure that users collect footage only of adults and of users’ own property. The House of Representatives Subcommittee on Economic and Consumer Policy continues to investigate Ring’s police partnerships and data policies. The Electronic Frontier Foundation has called Ring “a perfect storm of privacy threats,” while the UK surveillance camera commissioner has warned against “a very real power to understand, to surveil you in a way you’ve never been surveilled before.”

Ring demonstrates clearly that it is not new for potential breaches of privacy to be encouraged in the name of public safety; police departments urge citizens to use Ring and share the videos with police to fight crime. But emerging developments indicate that, in the fight against Covid-19, we can expect to see more and more private companies placed in the difficult position of becoming complicit in government overreach.

At least mobile phone users can opt out of receiving Amber Alerts, and residents can refuse to put Ring surveillance systems on their property. The Covid-19 pandemic has made some other technological intrusions effectively impossible to refuse. For example, online proctors who monitor students over webcams to ensure they do not cheat on exams taken at home were once something that students could choose to accept if they did not want to take an exam where and when they could be proctored face to face. With public schools and universities across the U.S. closed for the rest of the semester, students who refuse to give private online proctors access to their webcams – and, consequently, the ability to view their surroundings – cannot take exams at all.

Existing technology and data practices already have made the Federal Trade Commission sensitive to potential consumer privacy and data security abuses. For decades, this independent, bipartisan agency has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. It brought its first privacy and data security cases nearly 20 years ago, while I was Chief of Staff to then-Chairman Timothy J. Muris. The FTC took on Eli Lilly for disclosing the e-mail addresses of 669 subscribers to its Prozac reminder service – many of whom were government officials, and at a time of greater stigma for mental health issues – and Microsoft for (among other things) falsely claiming that its Passport website sign-in service did not collect any personally identifiable information other than that described in its privacy policy.

The privacy and data security practices of healthcare and software companies are likely to impact billions of people during the current coronavirus pandemic. The U.S. already has many laws on the books that are relevant to practices in these areas. One notable example is the Health Insurance Portability and Accountability Act, which set national standards for the protection of individually identifiable health information by health plans, health care clearinghouses and health care providers that conduct certain health care transactions electronically. While the FTC does not enforce HIPAA, it does enforce the Health Breach Notification Rule, as well as the provisions in the FTC Act used to challenge the privacy missteps of Eli Lilly and many other companies.

But technological developments have created gaps in HIPAA enforcement. For example, HIPAA applies to doctors’ offices, hospitals and insurance companies, but it may not apply to wearables, smartphone apps or websites. Yet sensitive medical information is now commonly stored in places other than health care practitioners’ offices.  Your phone and watch now collect information about your blood sugar, exercise habits, fertility and heart health. 

Observers have pointed to these emerging gaps in coverage as evidence of the growing need for federal privacy legislation. I, too, have called on the U.S. Congress to enact comprehensive federal privacy legislation – not only to address these emerging gaps, but for two other reasons.  First, consumers need clarity regarding the types of data collected from them, and how those data are used and shared. I believe consumers can make informed decisions about which goods and services to patronize when they have the information they need to evaluate the costs and benefits of using those goods. Second, businesses need predictability and certainty regarding the rules of the road, given the emerging patchwork of regimes both at home and abroad.

Rules of the road regarding privacy practices will prove particularly instructive during this global pandemic, as governments lean on the private sector for data on the grounds that the collection and analysis of data can help avert (or at least diminish to some extent) a public health catastrophe. With legal lines in place, companies would be better equipped to determine when they are being asked to cross the line for the public good, and whether they should require a subpoena or inform customers before turning over data. It is regrettable that Congress has been unable to enact federal privacy legislation to guide this discussion.

Understandably, Congress does not have privacy at the top of its agenda at the moment, as the U.S. faces a public health crisis. As I write, more than 579,000 Americans have been diagnosed with Covid-19, and more than 22,000 have perished. Sadly, those numbers will only increase. And the U.S. is not alone in confronting this crisis: governments globally have confronted more than 1.77 million cases and more than 111,000 deaths. For a short time, health and safety issues may take precedence over privacy protections. But some of the initiatives to combat the coronavirus pandemic are worrisome. We are learning more every day about how governments are responding in a rapidly developing situation; what I describe in the next section constitutes merely the tip of the iceberg. These initiatives are worth highlighting here, as are potential safeguards for privacy and civil liberties that societies around the world would be wise to embrace.

Some observers view public/private partnerships based on an extensive use of technology and data as key to fighting the spread of Covid-19. For example, Professor Jane Bambauer calls for contact tracing and alerts “to be done in an automated way with the help of mobile service providers’ geolocation data.” She argues that privacy is merely “an instrumental right” that “is meant to achieve certain social goals in fairness, safety and autonomy. It is not an end in itself.” Given the “more vital” interests in health and the liberty to leave one’s house, Bambauer sees “a moral imperative” for the private sector “to ignore even express lack of consent” by an individual to the sharing of information about him.

This proposition troubles me because the extensive data sharing that has been proposed in some countries, and that is already occurring in many others, is not mundane. In the name of advertising and product improvements, private companies have been hoovering up personal data for years. What this pandemic lays bare, though, is that while this trove of information was collected under the guise of cataloguing your coffee preferences and transportation habits, it can be reprocessed in an instant to restrict your movements, impinge on your freedom of association, and silence your freedom of speech. Bambauer is calling for detailed information about an individual’s every movement to be shared with the government when, in the United States under normal circumstances, a warrant would be required to access this information.

Indeed, with our mobile devices acting as the “invisible policeman” described by Justice William O. Douglas in Berger v. New York, we may face “a bald invasion of privacy, far worse than the general warrants prohibited by the Fourth Amendment.” Backward-looking searches and data hoards pose new questions of what constitutes a “reasonable” search. The stakes are high – both here and abroad, citizens are being asked to allow warrantless searches by the government on an astronomical scale, all in the name of public health.  

Abroad

The first country to confront the coronavirus was China. The World Health Organization has touted the measures taken by China as “the only measures that are currently proven to interrupt or minimize transmission chains in humans.” Among these measures are the “rigorous tracking and quarantine of close contacts,” as well as “the use of big data and artificial intelligence (AI) to strengthen contact tracing and the management of priority populations.” An ambassador for China has said his government “optimized the protocol of case discovery and management in multiple ways like backtracking the cell phone positioning.” Much as the Communist Party’s control over China enabled it to suppress early reports of a novel coronavirus, this regime vigorously ensured its people’s compliance with the “stark” containment measures described by the World Health Organization.

Before the Covid-19 pandemic, Hong Kong already had been testing the use of “smart wristbands” to track the movements of prisoners. The Special Administrative Region now monitors people quarantined inside their homes by requiring them to wear wristbands that send information to the quarantined individuals’ smartphones and alert the Department of Health and Police if people leave their homes, break their wristbands or disconnect them from their smartphones. When first announced in early February, the wristbands were required only for people who had been to Wuhan in the past 14 days, but the program rapidly expanded to encompass every person entering Hong Kong. The government denied any privacy concerns about the electronic wristbands, saying the Privacy Commissioner for Personal Data had been consulted about the technology and agreed it could be used to ensure that quarantined individuals remain at home.

Elsewhere in Asia, Taiwan’s Chunghwa Telecom has developed a system that the local CDC calls an “electronic fence.” Specifically, the government obtains the SIM card identifiers for the mobile devices of quarantined individuals and passes those identifiers to mobile network operators, which use phone signals to their cell towers to alert public health and law enforcement agencies when the phone of a quarantined individual leaves a certain geographic range. In response to privacy concerns, the National Communications Commission said the system was authorized by special laws to prevent the coronavirus, and that it “does not violate personal data or privacy protection.” In Singapore, travelers and others issued Stay-Home Notices requiring them to remain in their residences 24 hours a day for 14 days must respond within an hour if contacted by government agencies by phone, text message or WhatsApp. And to assist with contact tracing, the government has encouraged everyone in the country to download TraceTogether, an app that uses Bluetooth to identify other nearby phones with the app and tracks when phones are in close proximity.
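
The mechanics of Taiwan’s “electronic fence” can be illustrated with a brief sketch. The logic below is purely hypothetical (the telecom’s actual implementation has not been published) and assumes only what the description above states: the system knows which cell towers cover a quarantined person’s registered address and raises an alert when that person’s SIM card is seen on a tower outside that set.

```python
# Illustrative sketch only: Chunghwa Telecom's actual implementation has not been
# published, and every name and value below is hypothetical.

# Cell towers assumed to cover each quarantined person's registered address,
# keyed by the SIM identifier passed to the mobile network operator.
QUARANTINE_TOWERS = {
    "SIM-0987654321": {"tower_A12", "tower_A13"},
}

def breaches_electronic_fence(sim_id: str, serving_tower: str) -> bool:
    """Return True when a monitored SIM is observed on a tower outside the set
    covering its quarantine location, i.e. the condition that would trigger an
    alert to public health and law enforcement agencies."""
    allowed_towers = QUARANTINE_TOWERS.get(sim_id)
    if allowed_towers is None:
        return False  # SIM is not under quarantine monitoring
    return serving_tower not in allowed_towers

# Example network event: the phone registers with a tower outside the fence.
if breaches_electronic_fence("SIM-0987654321", "tower_B07"):
    print("Alert: possible quarantine breach")
```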

Israel’s Ministry of Health has launched an app for mobile devices called HaMagen (the shield) to prevent the spread of coronavirus by identifying contacts between diagnosed patients and people who came into contact with them in the 14 days prior to diagnosis. In March, the prime minister’s cabinet initially bypassed the legislative body to approve emergency regulations for obtaining without a warrant the cellphone location data and additional personal information of those diagnosed with or suspected of coronavirus infection. The government will send text messages to people who came into contact with potentially infected individuals, and will monitor the potentially infected person’s compliance with quarantine. The Ministry of Health will not hold this information; instead, it can make data requests to the police and Shin Bet, the Israel Security Agency. The police will enforce quarantine measures and Shin Bet will track down those who came into contact with the potentially infected.

Multiple Eastern European nations with constitutional protections for citizens’ rights of movement and privacy have suspended those protections by declaring states of emergency. For example, in Hungary the declaration of a “state of danger” has enabled Prime Minister Viktor Orbán’s government to take “extraordinary emergency measures” without parliamentary consent. His ministers have cited the possibility that the coronavirus will prevent a sufficient quorum of members of Parliament from gathering, making it necessary, in their view, for the government to act in the absence of legislative approval.

Member States of the European Union must protect personal data pursuant to the General Data Protection Regulation, and communications data, such as mobile location, pursuant to the ePrivacy Directive. The chair of the European Data Protection Board has observed that the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security. But if those measures allow for the processing of non-anonymized location data from mobile devices, individuals must have safeguards such as a right to a judicial remedy. “Invasive measures, such as the ‘tracking’ of individuals (i.e. processing of historical non-anonymized location data) could be considered proportional under exceptional circumstances and depending on the concrete modalities of the processing.” The EDPB has announced it will prioritize guidance on these issues.

EU Member States are already implementing such public security measures. For example, the government of Poland has by statute required everyone under a quarantine order due to suspected infection to download the “Home Quarantine” smartphone app. Those who do not install and use the app are subject to a fine. The app verifies users’ compliance with quarantine through selfies and GPS data. Users’ personal data will be administered by the Minister of Digitization, who has appointed a data protection officer. Each user’s identification, name, telephone number, quarantine location and quarantine end date can be shared with police and other government agencies. After two weeks, if the user does not report symptoms of Covid-19, the account will be deactivated — but the data will be stored for six years. The Ministry of Digitization claims that it must store the data for six years in case users pursue claims against the government. However, local privacy expert and Panoptykon Foundation cofounder Katarzyna Szymielewicz has questioned this rationale.

Even other countries that are part of the Anglo-American legal tradition are ramping up their use of data and working with the private sector to do so. The UK’s National Health Service is developing a data store that will include online/call center data from NHS Digital and Covid-19 test result data from the public health agency. While the NHS is working with private partner organizations and companies including Microsoft, Palantir Technologies, Amazon Web Services and Google, it has promised to keep all the data under its control, and to require those partners to destroy or return the data “once the public health emergency situation has ended.” The NHS also has committed to meet the requirements of data protection legislation by ensuring that individuals cannot be re-identified from the data in the data store.

Notably, each of the companies partnering with the NHS at one time or another has been subjected to scrutiny for its privacy practices. Some observers have noted that tech companies, which have been roundly criticized for a variety of reasons in recent years, may seek to use this pandemic for “reputation laundering.” As one observer cautioned: “Reputations matter, and there’s no reason the government or citizens should cast bad reputations aside when choosing who to work with or what to share” during this public health crisis.

At home

In the U.S., the federal government last enforced large-scale isolation and quarantine measures during the influenza (“Spanish Flu”) pandemic a century ago. But the Centers for Disease Control and Prevention track diseases on a daily basis by receiving case notifications from every state. The states mandate that healthcare providers and laboratories report certain diseases to the local public health authorities using personal identifiers. In other words, if you test positive for coronavirus, the government will know. Every state has laws authorizing quarantine and isolation, usually through the state’s health authority, while the CDC has authority through the federal Public Health Service Act and a series of presidential executive orders to exercise quarantine and isolation powers for specific diseases, including severe acute respiratory syndromes (a category into which the novel coronavirus falls).

Now local governments are issuing orders that empower law enforcement to fine and jail Americans for failing to practice social distancing. State and local governments have begun arresting and charging people who violate orders against congregating in groups. Rhode Island is requiring every non-resident who enters the state to be quarantined for two weeks, with police checks at the state’s transportation hubs and borders.

How governments discover violations of quarantine and social distancing orders will raise privacy concerns. Police have long been able to enforce such orders based on direct observation of violations. But if law enforcement authorities identify violations based on data collection rather than direct observation, the Fourth Amendment may be implicated. In Jones and Carpenter, the Supreme Court limited the warrantless tracking of Americans through GPS devices placed on their cars and through cellphone data. Building on the longstanding use of contact tracing to fight infectious diseases such as tuberculosis, public health authorities have found GPS data helpful in fighting the spread of Covid-19. This same data, though, also could be used to piece together evidence of violations of stay-at-home orders. As Chief Justice John Roberts wrote in Carpenter, “With access to [cell-site location information], the government can now travel back in time to retrace a person’s whereabouts… Whoever the suspect turns out to be, he has effectively been tailed every moment of every day for five years.”

The Fourth Amendment protects American citizens from government action, but the “reasonable expectation of privacy” test applied in Fourth Amendment cases connects the arenas of government action and commercial data collection. As Professor Paul Ohm of the Georgetown University Law Center notes, “the dramatic expansion of technologically-fueled corporate surveillance of our private lives automatically expands police surveillance too, thanks to the way the Supreme Court has construed the reasonable expectation of privacy test and the third-party doctrine.”

For example, the COVID-19 Mobility Data Network – infectious disease epidemiologists working with Facebook, Camber Systems and Cuebiq – uses mobile device data to inform state and local governments about whether social distancing orders are effective. The tech companies give the researchers aggregated data sets; the researchers give daily situation reports to departments of health, but say they do not share the underlying data sets with governments. The researchers have justified this model on the ground that users of the private companies’ apps consented to the collection and sharing of their data.
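
The distinction between handing over aggregated outputs and handing over the underlying data sets can be made concrete with a small sketch. The aggregation rule and suppression threshold below are hypothetical, not a description of the Network’s actual methodology; the point is simply that a situation report can be computed so that device-level records never leave the researchers’ environment.

```python
# Hypothetical illustration only: the Mobility Data Network's actual aggregation
# methodology is not described in this post.
from collections import defaultdict

MIN_RECORDS_PER_REGION = 50  # assumed suppression threshold for small cells

def daily_situation_report(device_records):
    """device_records: iterable of (device_id, region, km_traveled) tuples.
    Returns per-region aggregates only; device identifiers are never included."""
    totals = defaultdict(lambda: [0, 0.0])  # region -> [record_count, total_km]
    for _device_id, region, km_traveled in device_records:
        totals[region][0] += 1
        totals[region][1] += km_traveled
    return {
        region: {"records": count, "avg_km_traveled": km_sum / count}
        for region, (count, km_sum) in totals.items()
        if count >= MIN_RECORDS_PER_REGION  # drop small cells rather than report them
    }

# Example: only the aggregate dictionary would be shared with health departments.
report = daily_situation_report([("device-1", "County A", 3.2)] * 60)
```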

However, the assumption that consumers have given informed consent to the collection of their data (particularly for the purpose of monitoring their compliance with social isolation measures during a pandemic) is undermined by studies showing the average consumer does not understand all the different types of data that are collected and how their information is analyzed and shared with third parties – including governments. Technology and telecommunications companies have neither asked me to opt into tracking for public health nor made clear how they are partnering with federal, state and local governments. This practice highlights that data will be divulged in ways consumers cannot imagine – because no one assumed a pandemic when agreeing to a company’s privacy policy. This information asymmetry is part of why we need federal privacy legislation.

On Friday afternoon, Apple and Google announced their opt-in Covid-19 contact tracing technology. The owners of the two most common mobile phone operating systems in the U.S. said that in May they would release application programming interfaces that enable interoperability between iOS and Android devices using official contact tracing apps from public health authorities. At an unspecified date, Bluetooth-based contact tracing will be built directly into the operating systems. “Privacy, transparency, and consent are of utmost importance in this effort,” the companies said in their press release.  

At this early stage, we do not yet know exactly how the proposed Google/Apple contact tracing system will operate. It sounds similar to Singapore’s TraceTogether, which is already available in the iOS and Android mobile app stores (it has a 3.3 out of 5 average rating in the former and a 4.0 out of 5 in the latter). TraceTogether is also described as a voluntary, Bluetooth-based system that avoids GPS location data, does not upload information without the user’s consent, and uses changing, encrypted identifiers to maintain user anonymity. Perhaps the most striking difference, at least to a non-technical observer, is that TraceTogether was developed and is run by the Singaporean government, which has been a point of concern for some observers. The U.S. version – like finding abducted children through Amber Alerts and fighting crime via Amazon Ring – will be a partnership between the public and private sectors.     
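
For readers curious how Bluetooth-based tracing can flag contacts without GPS data, the sketch below illustrates the general pattern of rotating, pseudorandom identifiers. It is a simplification, not the actual Apple/Google or TraceTogether design; key handling, rotation schedules and exposure matching in those systems are considerably more elaborate.

```python
# Simplified sketch of Bluetooth proximity tracing with rotating identifiers.
# This is not the actual Apple/Google or TraceTogether protocol.
import hashlib
import os
import time

ROTATION_SECONDS = 15 * 60   # hypothetical rotation interval
daily_key = os.urandom(16)   # secret that never leaves the device

def current_ephemeral_id(now: float) -> str:
    """Derive a short-lived broadcast identifier; without the daily key,
    observers cannot link one interval's identifier to the next."""
    interval = int(now // ROTATION_SECONDS)
    return hashlib.sha256(daily_key + interval.to_bytes(8, "big")).hexdigest()[:16]

observed_ids = set()  # identifiers heard from nearby phones, stored locally

def record_contact(nearby_id: str) -> None:
    observed_ids.add(nearby_id)

def check_exposure(ids_published_for_diagnosed_users) -> bool:
    """Matching happens on the device; only identifiers derived from diagnosed
    users' keys are ever published by the health authority."""
    return bool(observed_ids & set(ids_published_for_diagnosed_users))

# In reality the identifier below would arrive over Bluetooth from another phone.
record_contact(current_ephemeral_id(time.time()))
```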

Recommendations

The global pandemic we now face is driving data usage in ways not contemplated by consumers. Entities in the private and public sector are confronting new and complex choices about data collection, usage and sharing. Organizations with Chief Privacy Officers, Chief Information Security Officers, and other personnel tasked with managing privacy programs are, relatively speaking, well-equipped to address these issues. Despite the extraordinary circumstances, senior management should continue to rely on the expertise and sound counsel of their CPOs and CISOs, who should continue to make decisions based on their established privacy and data security programs. Although developments are unfolding at warp speed, it is important – arguably now, more than ever – to be intentional about privacy decisions.

For organizations that lack experience with privacy and data security programs (and individuals tasked with oversight for these areas), now is a great time to pause, do some research and exercise care. It is essential to think about the longer-term ramifications of choices made about data collection, use and sharing during the pandemic. The FTC offers easily accessible resources, including Protecting Personal Information: A Guide for Business, Start with Security: A Guide for Business, and Stick with Security: A Business Blog Series. While the Gramm-Leach-Bliley Act (GLB) applies only to financial institutions, the FTC’s GLB compliance blog outlines some data security best practices that apply more broadly. The National Institute for Standards and Technology (NIST) also offers security and privacy resources, including a privacy framework to help organizations identify and manage privacy risks. Private organizations such as the Center for Information Policy Leadership, the International Association of Privacy Professionals and the App Association also offer helpful resources, as do trade associations. While it may seem like a suboptimal time to take a step back and focus on these strategic issues, remember that privacy and data security missteps can cause irrevocable harm. Counterintuitively, now is actually the best time to be intentional about choices in these areas.

Best practices like accountability, risk assessment and risk management will be key to navigating today’s challenges. Companies should take the time to assess and document the new and/or expanded risks from the collection, use and sharing of personal information. It is appropriate for these risk assessments to incorporate potential benefits and harms not only to the individual and the company, but also to society as a whole. Upfront assessments can help companies establish controls and incentives to facilitate responsible behavior, as well as help organizations demonstrate that they are fully aware of the impact of their choices (risk assessment) and in control of their impact on people and programs (risk mitigation). Written assessments can also facilitate transparency with stakeholders, raise awareness internally about policy choices and assist companies with ongoing monitoring and enforcement. Moreover, these assessments will facilitate a return to “normal” data practices when the crisis has passed.

In a similar vein, companies must engage in comprehensive vendor management with respect to the entities that are proposing to use and analyze their data. In addition to vetting proposed data recipients thoroughly, companies must be selective concerning the categories of information shared. The benefits of the proposed research must be balanced against individual protections, and companies should share only those data necessary to achieve the stated goals. To the extent feasible, data should be shared in de-identified and aggregated formats and data recipients should be subject to contractual obligations prohibiting them from re-identification. Moreover, companies must have policies in place to ensure compliance with research contracts, including data deletion obligations and prohibitions on data re-identification, where appropriate. Finally, companies must implement mechanisms to monitor third party compliance with contractual obligations.

Similar principles of necessity and proportionality should guide governments as they make demands or requests for information from the private sector. Governments must recognize the weight with which they speak during this crisis and carefully balance data collection and usage with civil liberties. In addition, governments have special obligations to ensure that any data collection done by them or at their behest is driven by the science of Covid-19; to be transparent with citizens about the use of data; and to provide due process for those who wish to challenge limitations on their rights. Finally, government actors should practice good data hygiene, including regularly reassessing the breadth of their data collection initiatives and adopting data retention and deletion policies.

In theory, government’s role could be reduced as market-driven responses emerge. For example, assuming the existence of universally accessible daily coronavirus testing with accurate results even during the incubation period, Hal Singer’s proposal for self-certification of non-infection among private actors is intriguing. Thom Lambert identified the inability to know who is infected as a “lemon problem;” Singer seeks a way for strangers to verify each other’s “quality” in the form of non-infection.

Whatever solutions we may accept in a pandemic, it is imperative to monitor the coronavirus situation as it improves, to know when to lift the more dire measures. Former Food and Drug Administration Commissioner Scott Gottlieb and other observers have called for maintaining surveillance because of concerns about a resurgence of the virus later this year. For any measures that conflict with Americans’ constitutional rights to privacy and freedom of movement, there should be metrics set in advance for the conditions that will indicate when such measures are no longer justified. In the absence of pre-determined metrics, governments may feel the same temptation as Hungary’s prime minister to keep renewing a “state of danger” that overrides citizens’ rights. As Slovak lawmaker Tomas Valasek has said, “It doesn’t just take the despots and the illiberals of this world, like Orbán, to wreak damage.” But privacy is not merely instrumental to other interests, and we do not have to sacrifice our right to it indefinitely in exchange for safety.

I recognize that halting the spread of the virus will require extensive and sustained effort, and I credit many governments with good intentions in attempting to save the lives of their citizens. But I refuse to accept that we must sacrifice privacy to reopen the economy. It seems a false choice to say that I must sacrifice my Constitutional rights to privacy, freedom of association and free exercise of religion for another’s freedom of movement. Society should demand that equity, fairness and autonomy be respected in data uses, even in a pandemic. To quote Valasek again: “We need to make sure that we don’t go a single inch further than absolutely necessary in curtailing civil liberties in the name of fighting for public health.” History has taught us repeatedly that sweeping security powers granted to governments during an emergency persist long after the crisis has abated. To resist the gathering momentum toward this outcome, I will continue to emphasize the FTC’s learning on appropriate data collection and use. But my remit as an FTC Commissioner is even broader – when I was sworn in on Sept. 26, 2018, I took an oath to “support and defend the Constitution of the United States” – and so I shall.


[1] Many thanks to my Attorney Advisors Pallavi Guniganti and Nina Frant for their invaluable assistance in preparing this article.

This guest post is by Patrick Todd, an England-qualified solicitor and author on competition law/policy in digital markets.

The above quote is not about Democratic presidential hopeful Elizabeth Warren’s policy views on sport. It is in fact an analogy to her proposal of splitting Google, Amazon, Facebook and Apple (“GAFA”) apart from their respective ancillary lines of business, a proposed solution to one of the hottest topics in antitrust law today, namely the alleged practice of GAFA exploiting the popularity of their platforms to gain competitive advantages in neighboring markets. Can a “referee” favor its own “players” in the digital platform game? Can we blame the “referee” if one “player” knocks out another? Should the “referee” be forced to intervene to protect said “player”? The analogy reflects a growing concern that platform owners’ entry into adjacent markets that are, or theoretically could be, served by independent firms creates an irreconcilable misalignment between the interests of users, independent companies and platform owners. As Margrethe Vestager, European Competition Commissioner and Vice-President of the European Commission (“EC”), has said:

[O]ne of the biggest issues we face is with platform businesses that also compete in other markets, with companies that depend on the platform. That means that the very same business becomes both player and referee, competing with others that rely on the platform, but also setting the rules that govern that competition.

Whether and to what extent successful firms in digital markets can enter and compete in neighboring markets, utilizing their existing expertise, has matured into an existential question that plagues and polarizes the antitrust community. Perhaps the most famous and debated case is the EC’s 2017 decision in Google Shopping, where it concluded that Google’s preferential placement of its comparison shopping results in a special box at the top of its search pages constituted an abuse of a dominant position under Article 102 TFEU. The EC found that such prominent placement, coupled with the denial of access to the box for rival price comparison websites, had the effect of driving traffic to Google’s own shopping site, depriving Google’s rivals of user-traffic. Google is strongly contesting both the facts and theory underpinning this decision in its appeal, the hearing for which took place in February. Meanwhile, complaints in relation to Google’s similar treatment of its other ancillary services, such as vacation rentals, have followed suit. Similar allegations have been made against Apple (see e.g. here), Amazon (see e.g. here) and Facebook (see e.g. here) for the way they design their platforms and organize their search results.

What links these cases, investigations and accusations is the doctrine of leverage, i.e. the practice of exploiting one’s market power in one market in order to extend that power to an adjacent market. Importantly, leveraging is not a standalone theory of harm in antitrust law: it is more appropriately regarded as a category of conduct where competitive effects are felt in a neighboring market (think tying, refusals to deal, margin squeeze, etc.). Examples of such conduct in the platform context could include platform owners: promoting their own adjacent products/services in search result pages; bundling, tying or pre-installing their adjacent products/services with platform software code; shutting off access to Application Programming Interfaces or data to third parties to decrease the relative interoperability of their rivals’ products/services; or generally reducing the compatibility of third-party products/services with the platform as a means of distribution.

This post examines various proposals that have been put forward to solve the alleged prevalence of anticompetitive leveraging in digital platform markets, namely:

  1. blocking platform owners from also owning adjacent products/services;
  2. prohibiting “favoring” or “self-preferencing” behavior (i.e. enforcing a non-discrimination standard); and
  3. reversing the burden of proof so that dominant platform firms bear the burden of showing that such conduct does not harm competition.

Each of these proposals would abrogate the “consumer welfare standard” baked into antitrust law, which permits exclusionary behavior as long as it constitutes “competition on the merits”, i.e. that the conduct ultimately benefits consumers. As Judge Frank Easterbrook has mercilessly held, “injuries to rivals are byproducts of vigorous competition, and the antitrust laws are not balm for rivals’ wounds.” Antitrust law maintains a distinction between pro- and anti-competitive leveraging because consumers frequently benefit from the conduct outlined above. Conversely, implementing any of the above proposals would decrease or negate entirely the ability of platform owners to show that such conduct benefits consumers.

This post then examines whether protecting competition in adjacent markets is important enough to sacrifice the consumer benefits that flow from pro-competitive leveraging. Empirical criteria that have been present in comparable instances of such intervention, such as bottleneck power over distribution, evidence of widespread harm to competition in neighboring markets, static product boundaries, and a lack or unimportance of integrative efficiencies, are not satisfied in the current context. Absent some proof that they are, the consumer welfare framework under antitrust law should prevail without recourse to more intrusive intervention.

Proposals to regulate the activities of digital platform owners in neighboring markets

1. Structural separation

Some scholars, such as Lina Khan, propose to implement “[s]tructural remedies and prophylactic bans [to] limit the ability of dominant platforms to enter certain distinct lines of business.” Senator Warren has echoed this proposal, calling for “large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform.” Under this proposal, Amazon would be unable to act both as an online marketplace and a seller on its own marketplace, Google would be unable to act as both a search engine and a mapping provider, and Apple and Google would be unable to act as both producers of mobile operating systems and apps that run on those operating systems. Meanwhile, Facebook would be unable to operate both its core social media platform and separate services, such as dating, local buy-and-sell, and other businesses like Instagram and WhatsApp. Khan posits that such separation is the primary method of “prevent[ing] leveraging and eliminat[ing] a core conflict of interest currently embedded in the business model of dominant platforms.”

A rule that prohibits entry into neighboring markets will certainly catch all instances of harmful leveraging, but it will inevitably also condemn all instances of leveraging that are in fact beneficial to consumers (see below for examples). Moreover, structural separation would also condemn efficiencies stemming from vertical integration that do not depend on leveraging behavior, e.g. elimination of double marginalization. As Bruce Owen sums up, such intervention “is not necessary, and may well reduce welfare by deterring efficient investments,” in circumstances where “[e]mpirical evidence that vertical integration or vertical restraints are harmful is weak, compared to evidence that vertical integration is beneficial.”

2. Non-discrimination principles

Other scholars seek a prohibition on leveraging, i.e. a “non-discrimination” or “platform neutrality” standard whereby platform owners cannot treat third-party products/services differently to how they treat their own. Though framed as a regulatory regime operating in parallel to antitrust law, such regulation would have the effect of supplanting antitrust in favor of a standard that blocks all leveraging behavior, whether pro- or anti-competitive. For example, Apple and Google could still produce both apps and software platforms, but they would be unable to bundle them together, even if doing so improves the user experience (or benefits app developers).

This proposal also disregards the distinction between pro- and anti-competitive leveraging (albeit in a less intrusive manner than structural separation). It would, however, appear to maintain efficiencies stemming solely from vertical integration, as long as said benefits do not result in preferential treatment of the platform owner’s products/services.

3. Reversing the burden of proof

Though not regulatory in nature, it is also worth including the proposal in the EC’s expert report on “competition in the digital era” of recalibrating the legal analysis of leveraging conduct by “err[ing] on the side of disallowing potentially anti-competitive conducts, and impos[ing] on the incumbent the burden of proof for showing the pro-competitiveness of its conduct.” Under this proposal, once a plaintiff establishes that leveraging conduct exists (without having to establish that it satisfies pre-existing legal criteria), the defendant would bear the burden of showing that its conduct did not have long-run anti-competitive effects or that the conduct had an overriding efficiency rationale.

As Dolmans and Pesch point out, proving that conduct does not have a long-term impact on competition may be nigh on impossible, as it involves proving a negative. This proposal would therefore bring non-discrimination in by the back door and return antitrust law to form-based rules that neglect the actual effects of conduct on competition or consumers. Moreover, making it unduly difficult for dominant firms to show that their conduct is in fact pro-competitive, despite any exclusionary effects, would similarly collapse an effects-based model for leveraging conduct into blanket non-discrimination. The report’s authors admit as much, citing for their proposal a report by the French telecoms regulator which advocates “a principle of ‘net neutrality’ for smartphones, tablets and voice assistants” (i.e. a non-discrimination standard).

Consumer vs. small-business welfare: which should prevail in digital markets?

Each of the above proposals would, to varying degrees, dissolve the distinctions between pro- and anti-competitive leveraging and (in the case of structural separation) pro- and anti-competitive vertical integration, significantly curtailing the ability of firms to legitimately out-compete their rivals in neighboring markets. All leveraging would be presumptively harmful to competition and, by extension, consumers.

The question then becomes whether we should ignore the incentive to innovate and compete in platform markets and turn our societal interests to competition within platforms. It has long been the case that “the primary purpose of the antitrust laws is to protect interbrand competition.” However, in certain circumstances, it can be preferable to shift the focus from inter- to intra-brand competition (often through legislation). Take, for example, must-carry provisions imposed on cable operators in the US or net neutrality regulation (repealed in the US, but prevailing in the EU). In circumstances such as these, society willingly foregoes benefits of continued innovation and competition in the inter-brand market because it has concluded, for one reason or another, that bolstering competition in the intra-brand market is more important. This can entail tolerating counterfactually higher prices or reduced quality as a byproduct of protecting interests deemed to be more important, such as maintaining a pluralistic downstream market. In line with the above proposals, there is a growing belief that such an inversion of the goals of competition policy is exactly what is needed in digital markets. This section examines the empirical criteria that one would expect to be verified before shifting the focal point of competition policy from inter-platform competition to intra-platform competition.

1. Strategic bottleneck power over distribution

In past instances of “access regulation” or structural separation of vertically integrated firms, there have been concerns that the targeted firms had strategic “bottleneck” power over the distribution of some downstream product or service, i.e. the firms sat between a set of suppliers and consumers and, through the control of some vital input or method of distribution, controlled access between the two. Strategic bottleneck power was present in the must-carry provisions imposed on cable operators in the US, the non-discrimination principles enshrined in § 616 of the US Communications Act, and net neutrality regulation. The same applies to structural separation: when the District Court approved the consent decree structurally separating AT&T’s long-distance arm from its local operating companies, it was motivated by the fact that “the principal means by which AT&T has maintained monopoly power in telecommunications has been its control of the Operating Companies with their strategic bottleneck position.”

Do GAFA possess strategic bottleneck power? Take Google Search, for example. In the EC’s decision in Google Shopping, it found that referrals from Google Search accounted for a large proportion of traffic to rival comparison shopping websites and that traffic could not be effectively replaced by other sources. However, firms operating in neighboring markets have many more routes to consumers that do not rely on discovery through a search engine. As John Temple Lang observes, they can access consumers through “direct navigation, specialized search services, social networks such as Facebook and Pinterest, partnerships with PC and mobile device markets, agreements with other publishers to refer traffic to each other, and so on.” Apple’s iOS and Google’s Android, on the other hand, compete against each other and thus neither firm, by definition, can possess the degree of strategic bottleneck power required to consider abandoning their respective incentives to innovate. As for Amazon: in 2019 it was estimated that Amazon accounted for 38% of all online sales in the US. This may seem like a staggering volume, but it in fact shows that distributors can – and do – bypass Amazon’s platform to reach consumers, with great success.

Insofar as GAFA possess strategic bottleneck power over particular categories of goods (i.e. in particular neighboring markets), this would not justify shifting the focus to intra-platform competition across all product categories. The market power element of traditional antitrust analyses serves to guard competition in these circumstances by carving a remedy around conduct that illegitimately hampers the ability of competitors in neighboring markets to compete.

2. Widespread harm in adjacent markets

To ban platform owners from leveraging anti- and pro-competitively, one would expect there to be cogent evidence of harm to competition across a multitude of adjacent markets that depend on the platforms for access to consumers. However, as Feng Zhu and Qihong Liu note, there is a dearth of empirical evidence on the effects of platform owners’ entry into complementary markets. Even studies that support the proposition that such entry dampens or skews innovation incentives of firms in adjacent markets conclude that the welfare effects are ambiguous, and that consumers may actually be better off (see e.g. here and here). Other studies show that third-party producers can benefit from platform entry into adjacent markets (see e.g. here and here). It is therefore clear that this criterion, which should also be a prerequisite to imposing blanket regulation to control the behavior of platform owners, has not been satisfied.

3. Discernible and static product boundaries

In prior cases of access regulation, the input that firms in neighboring markets have depended on to access consumers has been clearly distinguishable from their own products. Although antitrust literature commonly refers to “platforms” and “applications” as if these are perceptibly different products, the reality is much different: both platforms and their complementary applications are composed of individual components and, as Carl Shapiro notes, “the boundary between the ‘platform’ and services running on that platform can be fuzzy and can change over time.” Any attempt to freeze the definitional boundary of a platform would negate platform owners’ incentive to build upon and improve their platforms, to the detriment of consumers and (in the case of software platforms) app developers. If Apple were prevented from vertically integrating, what would iOS look like? Could it even have a voice-call function? Alternatively, under non-discrimination regulation, what would a new iOS device look like? Would it just be a blank screen where the user is then forced to choose between various alternatives?

The problem with proposing to separate platforms from adjacent products is that any platform component can theoretically be modularized and opened to competition from third parties. Because integration of complementary components is an essential part of inter-platform competition, imposing the proposed interventions could destroy the very ecosystems on which the competitors critics seek to protect depend, and prevent the next popular digital platform from emerging.

4. Lack or unimportance of integrative efficiencies

Critics may counter that efficiencies stemming from leveraging are unimportant or non-existent, or do not depend on conduct that has exclusionary effects, and thus nothing is lost by shifting antitrust’s focus to the protection of competitors in adjacent markets. However, any iPhone user will testify to the consumer benefits flowing from technically integrating multiple platform components and features into a single package (e.g. voice-assistant technology and mapping functionality). In a similar vein, the UK CMA, in approving Google’s acquisition of mapping software company Waze, was prompted in part by the fact that “[i]ntegration of a map application into the operating system creates opportunities for operating system developers to use their own or affiliated services (for example search engines and social networks) to improve the experience of users.” Integrating Product A (the platform) and Product B (a component or the software code of an ancillary product/service) can facilitate the creation of some new functionality or feature in the form of Product C that users value and, crucially, could not achieve by combining Products A and B themselves (from one or multiple firms). Another potential consumer benefit flowing from leveraging is a reduction in consumer search costs, i.e. providing users with the functionality or end results that they seek more quickly and efficiently. Even though anti-competitive concerns can theoretically arise, it remains the case that, empirically, integration of software code is predominantly motivated by efficiency justifications and occurs in both competitive and concentrated markets.

Conclusion

Much of the impetus to enact the above proposals stems from the perception that antitrust law in its current form does not act quickly enough to restore competition in the market. Indeed, it can take over a decade for the dust to settle in big ticket antitrust cases, by which time antitrust remedies may be too little too late in those cases where the authorities get it right. To the extent that authorities can think of innovative ways to enforce existing standards more quickly and accurately, this would be met with widespread enthusiasm (but may be idealistic).

However, introducing more intrusive measures to protect competition in neighboring markets, and thereby undermining the consumer welfare standard that protects the ability of dominant firms to legitimately enter neighboring markets and compete on the merits, is not warranted. Intervention should remain targeted and evidence-based. If a complainant can adduce evidence that a platform owner is leveraging into a neighboring market and raising the complainant’s cost of doing business, and if the platform owner cannot show a pro-competitive justification for the behavior, antitrust law will intervene to restore competition under existing standards. For this, no regulatory intervention or other change to existing rules is necessary.

For a more detailed version of this post, see: Patrick F. Todd, Digital Platforms and the Leverage Problem, 98 Neb. L. Rev. 486 (2019).

Available at: https://digitalcommons.unl.edu/nlr/vol98/iss2/12.

[This post is the fourth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Pallavi Guniganti, editor of Global Competition Review.]

Start with the assumption that there is a problem

The European Commission and Austria’s Federal Competition Authority are investigating Amazon over its use of Marketplace sellers’ data. US senator Elizabeth Warren has said that one reason to require “large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform” is to prevent them from using data they obtain from third parties on the platform to benefit their own participation on the platform.

Amazon tweeted in response to Warren: “We don’t use individual sellers’ data to launch private label products.” However, an Amazon spokeswoman would not answer questions about whether it uses aggregated non-public data about sellers, or data from buyers; and whether any formal firewall prevents Amazon’s retail operation from accessing Marketplace data.

If the problem is solely that Amazon’s own retail operation can access data from the Marketplace, structurally breaking up the company and forbidding it and other platforms from participating on those platforms may be a far more extensive intervention than is needed. A targeted response such as a firewall could remedy the specific competitive harm.
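
In practice, a firewall remedy of this kind boils down to an access-control rule plus an audit trail. The sketch below is purely hypothetical and describes no company’s actual systems; it only illustrates the idea that a gatekeeper can refuse seller-level Marketplace data to a retail unit while still permitting suitably aggregated information, and can log every request for an independent monitor.

```python
# Purely hypothetical sketch of a data firewall between business units.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("firewall-audit")

# Assumed policy: which internal units may read which data categories.
ACCESS_POLICY = {
    "marketplace_seller_data": {"marketplace_ops"},              # retail unit excluded
    "aggregate_category_trends": {"marketplace_ops", "retail"},
}

def request_data(unit: str, category: str) -> bool:
    """Gatekeeper that every internal data request must pass; each decision is
    logged so an independent monitor could verify compliance later."""
    allowed = unit in ACCESS_POLICY.get(category, set())
    audit_log.info("unit=%s category=%s allowed=%s", unit, category, allowed)
    return allowed

# Example: the retail unit is refused individual sellers' data but may see aggregates.
assert request_data("retail", "marketplace_seller_data") is False
assert request_data("retail", "aggregate_category_trends") is True
```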

Germany’s Federal Cartel Office implicitly recognised this with its Facebook decision, which did not demand the divestiture of every business beyond the core social network – the “Mark Zuckerberg Production” that began in 2004. Instead, the competition authority prohibited Facebook from conditioning the use of that social network on consent to the collection and combination of data from WhatsApp, Oculus, Masquerade, Instagram and any other sites or apps where Facebook might track them.

The decision does not limit data collection on Facebook itself. “It is taken into account that an advertising-funded social network generally needs to process a large amount of personal data,” the authority said. “However, the Bundeskartellamt holds that the efficiencies in a business model based on personalised advertising do not outweigh the interests of the users when it comes to processing data from sources outside of the social network.”

The Federal Cartel Office thus aims to wall off the data collected on Facebook from data that can be collected anywhere else. It ordered Facebook to present a road map for how it would implement these changes within four months of the February 2019 decision, but the time limit was suspended by the company’s emergency appeal to the Düsseldorf Higher Regional Court.

Federal Cartel Office president Andreas Mundt has described the kind of remedy he had ordered for Facebook as not exactly structural, but going in a “structural direction” that might work for other cases as well. Keeping the data apart is a way to “break up this market power” without literally breaking up the corporation, and the first step to an “internal divestiture”, he said.

Mundt claimed that this kind of remedy gets to “the core of the problem”: big internet companies being able to out-compete new entrants, because the former can obtain and process data even beyond what they collected on a single service that has attracted a large number of users.

He used terms like “silo” rather than “firewall”, but the essential idea is to protect competition by preventing the dissemination of certain information. Antitrust authorities worldwide have considered firewalls, particularly in vertical merger remedies, as a way to prevent the anticompetitive movement of data while still allowing for some efficiencies of business units being under the same corporate umbrella.

Notwithstanding Mundt’s reference to a “structural direction”, competition authorities including his own have traditionally classified firewalls as a behavioural or conduct remedy. They purport to solve a specific problem: the movement of information.

Other aspects of big companies that can give them an advantage – such as the use of profits from one part of a company to invest in another part, perhaps to undercut rivals on prices – would not be addressed by firewalls. They would more likely require dividing up a company at the corporate level.

But if data are the central concern, then the way forward might be found in firewalls.

What do the enforcers say?

Germany

The Federal Cartel Office’s May 2017 guidance on merger remedies disfavours firewalls, stating that such obligations are “not suitable to remedy competitive harm” because they require continuous oversight. Employees of a corporation in almost any industry commonly exchange information on a daily basis, making it “extremely difficult to identify, stop and prevent non-compliance with the firewall obligations”, the guidance states. In a footnote, it acknowledges that other, unspecified jurisdictions have regarded firewalls “as an effective remedy to remove competition concerns”.

UK

The UK’s Competition and Markets Authority takes a more optimistic view of the ability to keep a firewall in place, at least in the context of a vertical integration to prevent the use of “privileged information generated by competitors’ use of the merged company’s facilities or products”. In addition to setting up the company to restrict information flows, staff interactions and the sharing of services, physical premises and management, the CMA also requires the commitment of “significant resources to educating staff about the requirements of the measures and supporting the measures with disciplinary procedures and independent monitoring”. 

EU

The European Commission’s merger remedies notice is quite short. It does not mention firewalls or Chinese walls by name, simply noting that any non-structural remedy is problematic “due to the absence of effective monitoring of its implementation” by the commission or even other market participants. A 2011 European Commission submission to the Organisation for Economic Co-operation and Development was gloomier: “We have also found that firewalls are virtually impossible to monitor.”

US DOJ

The US antitrust agencies have been inconsistent in their views, and not on a consistent partisan basis. Under George W Bush, the Department of Justice’s antitrust division’s 2004 merger guidance said “a properly designed and enforced firewall” could prevent certain competition harms. But it also would require the DOJ and courts to expend “considerable time and effort” on monitoring, and “may frequently destroy the very efficiency that the merger was designed to generate. For these reasons, the use of firewalls in division decrees is the exception and not the rule.”

 Under Barack Obama, the Antitrust Division revised its guidance in 2011 to omit the most sceptical language about firewalls, replacing it with a single sentence about the need for effective monitoring. Under Donald Trump, the Antitrust Division has withdrawn the 2011 guidance, and the 2004 guidance is operative.

US FTC

At the Federal Trade Commission, on the other hand, firewalls had long been relatively uncontroversial among both Republicans and Democrats. For example, the commissioners unanimously agreed to a firewall remedy for PepsiCo’s and Coca-Cola’s separate 2010 acquisitions of bottlers and distributors that also dealt with a rival beverage maker, the Dr Pepper Snapple Group. (The FTC later emphasised the importance in those cases of obtaining industry expert monitors, who “have provided commission staff with invaluable insight and evaluation regarding each company’s compliance with the commission’s orders”.)

In 2017, the two commissioners who remained from the Obama administration both signed off on the Broadcom/Brocade merger based on a firewall – as did the European Commission, which also mandated interoperability commitments. And the Democratic commissioners appointed by President Trump voted with their Republican colleagues in 2018 to clear the Northrop Grumman/Orbital ATK deal subject to a behavioural remedy that included supply commitments and firewalls.

Several months later, however, those Democrats dissented from the FTC’s approval of Staples/Essendant, which the agency conditioned solely on a firewall between Essendant’s wholesale business and the Staples unit that handles corporate sales. While a firewall to prevent Staples from exploiting Essendant’s commercially-sensitive data about Staples’ rivals “will reduce the chance of misuse of data, it does not eliminate it,” Commissioner Rohit Chopra said. He emphasised the difficulty of policing oral communications, and said the FTC instead could have required Essendant to return its customers’ data. Commissioner Rebecca Kelly Slaughter said she shared Chopra’s “concerns about the efficacy of the firewall to remedy the information sharing harm”.

The majority defended firewalls’ effectiveness, noting that it had used them to solve competition concerns in past vertical mergers, “and the integrity of those firewalls was robust.” The Republican commissioners cited the FTC’s review of the merger remedies it had imposed from 2006 to 2012, which concluded: “All vertical merger orders were judged successful.”

Republican commissioner Christine Wilson wrote separately about the importance of choosing “a remedy that is narrowly tailored to address the likely competitive harms without doing collateral damage.” Certain behavioural remedies for past vertical mergers had gone too far and even resulted in less competition, she said. “I have substantially fewer qualms about long-standing and less invasive tools, such as the ‘firewalls, fair dealing, and transparency provisions’ the Antitrust Division endorsed in the 2004 edition of its Policy Guide.”

Why firewalls don’t work, especially for big tech

Firewalls are designed to prevent the anticompetitive harm of information exchange, but whether they work depends on whether the companies and their employees behave themselves – and if they do not, on whether antitrust enforcers can know it and prove it. Deputy assistant attorney general Barry Nigro at the Antitrust Division has questioned the effectiveness of firewalls as a remedy for deals where the relevant business units are operationally close. The same problem may arise outside the merger context.

For example, Amazon’s investment fund for products to complement its Alexa virtual assistant could be seen as having the kind of firewall that is undercut by the practicalities of how a business operates. CNBC reported in September 2017 that “Alexa Fund representatives called a handful of its portfolio companies to say a clear ‘firewall’ exists between the Alexa Fund and Amazon’s product development teams.” The chief executive from Nucleus, one of those portfolio companies, had complained that Amazon’s Echo Show was a copycat of Nucleus’s product. While Amazon claimed that the Alexa Fund has “measures” to ensure “appropriate treatment” of confidential information, the companies said the process of obtaining the fund’s investment required them to work closely with Amazon’s product teams.

CNBC contrasted this with Intel Capital – a division of the technology company that manages venture capital and investment – where a former managing director said he and his colleagues “tried to be extra careful not to let trade secrets flow across the firewall into its parent company”.

Firewalls are commonplace to corporate lawyers, who institute temporary blocks on the transmission of information in a variety of situations, such as during due diligence on a deal. This experience may lead such attorneys to put more faith in firewalls than enforcement advocates do.

Diana Moss, the president of the American Antitrust Institute, says that like other behavioural remedies, firewalls “don’t change any incentive to exercise market power”. In contrast, structural remedies eliminate that incentive by removing the part of the business that would make the exercise of market power profitable.

No internal monitoring or compliance ensures the firewall is respected, Moss says, unless a government consent order installs a monitor in a company to make sure the business units aren’t sharing information. This would be unlikely to occur, she says.

Moss’s 2011 white paper on behavioural merger remedies, co-authored with John Kwoka, reviews how well such remedies have worked. It notes that “information firewalls in Google-ITA and Comcast-NBCU clearly impede the joint operation and coordination of business divisions that would otherwise naturally occur.” 

Lina Khan’s 2019 Columbia Law Review article, “The Separation of Platforms and Commerce,” repeatedly cites Moss and Kwoka in the course of arguing that non-separation solutions such as firewalls do not work.

Khan concedes that information firewalls “in theory could help prevent information appropriation by dominant integrated firms.” But regulating the dissemination of information is especially difficult “in multibillion dollar markets built around the intricate collection, combination, and sale of data”, as companies in those markets “will have an even greater incentive to combine different sets of information”.

Why firewalls might work, especially for big tech

Yet neither Khan nor Moss points to an example of a firewall that clearly did not work. Khan writes: “Whether the [Google-ITA] information firewall was successful in preventing Google from accessing rivals’ business information is not publicly known. A year after the remedy expired, Google shut down” the application programming interface, through which ITA had provided its customisable flight search engine.

Even as enforcement advocates throw doubt on firewalls, enforcers keep requiring them. China’s Ministry of Commerce even used them to remedy a horizontal merger, in two stages of its conditions on Western Digital’s acquisition of Hitachi’s hard disk drive business.

If German courts allow Andreas Mundt’s remedy for Facebook to go into effect, it will provide an example of just how effective a firewall can be on a platform. The decision requires Facebook to detail its technical plan to implement the obligation not to share data on users from its subsidiaries and its tracking on independent websites and apps.

A section of the “frequently asked questions” about the Federal Cartel Office’s Facebook case includes: “How can the Bundeskartellamt enforce the implementation of its decision?” The authority can impose fines for known non-compliance, but that assumes it could detect violations of its order. Somewhat tentatively, the agency says it could carry out random monitoring, which is “possible in principle… as the actual flow of data eg from websites to Facebook can be monitored by analysing websites and their components or by recording signals.”
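
To make that kind of random monitoring concrete, a crawler can load a page in a headless browser and record any network requests it sends to watched tracking domains. The sketch below is purely illustrative: it assumes the puppeteer library, and the domain watch list is a placeholder rather than anything the authority has actually specified.

```typescript
// Illustrative sketch only: crawl a page in a headless browser and log any
// requests that flow to watched third-party tracking domains.
import puppeteer from 'puppeteer';

// Hypothetical watch list, not the authority's actual monitoring targets.
const TRACKER_DOMAINS = ['facebook.com', 'facebook.net'];

async function auditPage(url: string): Promise<string[]> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const hits: string[] = [];

  // Record every outgoing request whose host matches a watched domain.
  page.on('request', (req) => {
    const host = new URL(req.url()).hostname;
    if (TRACKER_DOMAINS.some((d) => host === d || host.endsWith(`.${d}`))) {
      hits.push(req.url());
    }
  });

  await page.goto(url, { waitUntil: 'networkidle2' });
  await browser.close();
  return hits;
}

// Usage: flag pages whose embedded components send signals to watched domains.
auditPage('https://example.com').then((hits) =>
  console.log(hits.length ? `data flows detected:\n${hits.join('\n')}` : 'no flows detected'),
);
```

A compliance team could run such a crawl over a random sample of popular sites and compare the detected flows against whatever data combinations the order permits.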

As perhaps befits the digital difference between Staples and Facebook, the German authority posits monitoring that would not be able to catch the kind of “oral communications” that Commissioner Chopra worried about when the US FTC cleared Staples’ acquisition of Essendant. But the use of such high-tech monitoring could make firewalls even more appropriate as a remedy for platforms – which look to large data flows for a competitive advantage – than for old economy sales teams that could harm competition with just a few minutes of conversation.

Rather than a human monitor installed in a company to guard against firewall breaches, which Moss said was unlikely, software installed on employee computers and email systems might detect data flows between business units that should be walled off from each other. Breakups and firewalls are both longstanding remedies, but the latter may be more amenable to the kind of solutions that “big tech” itself has provided.
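
As a rough illustration of what such software-based monitoring might look like, the sketch below scans a simplified transfer log and flags anything that crosses a walled-off boundary between business units. The unit names and log format are hypothetical stand-ins, not a description of any company’s actual systems.

```typescript
// Illustrative only: flag data transfers that cross a firewall boundary
// between business units that are supposed to be walled off from each other.
type BusinessUnit = 'marketplace' | 'retail' | 'other';

interface TransferRecord {
  sender: string;            // e.g. an employee email address
  recipient: string;
  senderUnit: BusinessUnit;
  recipientUnit: BusinessUnit;
  description: string;       // e.g. an attachment name or dataset label
}

// Pairs of units that a hypothetical consent order walls off from each other.
const walledOffPairs: Array<[BusinessUnit, BusinessUnit]> = [
  ['marketplace', 'retail'],
];

function crossesFirewall(t: TransferRecord): boolean {
  return walledOffPairs.some(
    ([a, b]) =>
      (t.senderUnit === a && t.recipientUnit === b) ||
      (t.senderUnit === b && t.recipientUnit === a),
  );
}

function flagViolations(log: TransferRecord[]): TransferRecord[] {
  return log.filter(crossesFirewall);
}

// One compliant transfer and one that a monitor would escalate for human review.
const sample: TransferRecord[] = [
  { sender: 'a@example.com', recipient: 'b@example.com', senderUnit: 'retail', recipientUnit: 'other', description: 'pricing memo' },
  { sender: 'c@example.com', recipient: 'd@example.com', senderUnit: 'marketplace', recipientUnit: 'retail', description: 'seller sales export' },
];
console.log(flagViolations(sample));
```

The hard part, of course, is not the comparison but classifying who sits on which side of the wall and capturing all the channels (email, shared drives, internal APIs) through which data actually moves.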

The world discovered something this past weekend that the world had already known: that what you say on the Internet stays on the Internet, spread intractably and untraceably through the tendrils of social media. I refer, of course, to the Cambridge Analytica/Facebook SNAFU (or just Situation Normal): the disclosure that Cambridge Analytica, a company used for election analytics by the Trump campaign, breached a contract with Facebook in order to collect information on 50 million Facebook users without authorization. Since the news broke, Facebook’s stock is off by about 10 percent, Cambridge Analytica is almost certainly a doomed company, the FTC has started investigating both, private suits against Facebook are already being filed, the Europeans are investigating as well, and Cambridge Analytica is now being blamed for Brexit.

That is all fine and well, and we will be discussing this situation and its fallout for years to come. I want to write about a couple of other aspects of the story: the culpability of 270,000 Facebook users in disclosing the data of 50 million of their peers, and what this situation tells us about evergreen proposals to “open up the social graph” by making users’ social media content portable.

I Have Seen the Enemy and the Enemy is Us

Most discussion of Cambridge Analytica’s use of Facebook data has focused on the large number of user records Cambridge Analytica obtained access to – 50 million – and the fact that it obtained these records through some problematic means (and Cambridge Analytica pretty clearly breached contracts and acted deceptively to obtain these records). But one needs to dig a bit deeper to understand the mechanics of what actually happened. Once one does this, the story becomes both less remarkable and more interesting.

(For purposes of this discussion, I refer to Cambridge Analytica as the actor that obtained the records. It’s actually a little more complicated: Cambridge Analytica worked with an academic researcher to obtain these records. That researcher was given permission by Facebook to work with and obtain data on users for purposes relating to his research. But he exceeded that scope of authority, sharing the data that he collected with CA.)

The 50 million users’ records that Cambridge Analytica obtained access to were given to Cambridge Analytica by about 270,000 individual Facebook users. Those 270,000 users became involved with Cambridge Analytica by participating in an online quiz – one of those fun little throwaway quizzes that periodically get some attention on Facebook and other platforms. As part of taking that quiz, those 270,000 users agreed to grant Cambridge Analytica access to their profile information, including information available through their profile about their friends.

This general practice is reasonably well known. Any time a quiz or game like this has its moment on Facebook it is also accompanied by discussion of how the quiz or game is likely being used to harvest data about users. The terms of use of these quizzes and games almost always disclose that such information is being collected. More telling, any time a user posts a link to one of these quizzes or games, some friend will invariably leave a comment warning about these terms of service and of these data harvesting practices.

There are two remarkable things about this. The first remarkable thing is that there is almost nothing remarkable about the fact that Cambridge Analytica obtained this information. A hundred such data harvesting efforts have preceded Cambridge Analytica; and a hundred more will follow it. The only remarkable thing about the present story is that Cambridge Analytica was an election analytics firm working for Donald Trump – never mind that by all accounts the data collected proved to be of limited use generally in elections or that when Cambridge Analytica started working for the Trump campaign they were tasked with more mundane work that didn’t make use of this data.

More remarkable is that Cambridge Analytica didn’t really obtain data about 50 million individuals from Facebook, or from a Facebook quiz. Cambridge Analytica obtained this data from those 50 million individuals’ friends.

There are unquestionably important questions to be asked about the role of Facebook in giving users better control over, or ability to track uses of, their information. And there are questions about the use of contracts such as that between Facebook and Cambridge Analytica to control how data like this is handled. But this discussion will not be complete unless and until we also understand the roles and responsibilities of individual users in managing and respecting the privacy of their friends.

Fundamentally, we lack a clear and easy way to delineate privacy rights. If I share with my friends that I participated in a political rally, that I attended a concert, that I like certain activities, that I engage in certain illegal activities, what rights do I have to control how they subsequently share that information? The answer in the physical world, in the American tradition, is none – at least, unless I take affirmative steps to establish such a right prior to disclosing that information.

The answer is the same in the online world, as well – though platforms have substantial ability to alter this if they so desire. For instance, Facebook could change the design of its system to prohibit users from sharing information about their friends with third parties. (Indeed, this is something that most privacy advocates think social media platforms should do.) But such a “solution” to the delineation problem has its own problems. It assumes that the platform is the appropriate arbiter of privacy rights – a perhaps questionable assumption given platforms’ history of getting things wrong when it comes to privacy. More trenchant, it raises questions about users’ ability to delineate or allocate their privacy differently than allowed by the platforms, particularly where a given platform may not allow the delineation or allocation of rights that users prefer.

The Badness of the Open Graph Idea

One of the standard responses to concerns about how platforms may delineate and allow users to allocate their privacy interests is, on the one hand, that competition among platforms would promote desirable outcomes and that, on the other hand, the relatively limited and monopolistic competition that we see among firms like Facebook is one of the reasons that consumers today have relatively poor control over their information.

The nature of competition in markets such as these, including whether and how to promote more of it, is a perennial and difficult topic. The network effects inherent in markets like these suggest that promoting competition may in fact not improve consumer outcomes, for instance. Competition could push firms to less consumer-friendly privacy positions if that allows better monetization and competitive advantages. And the simple fact that Facebook has lost 10% of its value following the Cambridge Analytica news suggests that there are real market constraints on how Facebook operates.

But placing those issues to the side for now, the situation with Cambridge Analytica offers an important cautionary tale about one of the perennial proposals for how to promote competition between social media platforms: “opening up the social graph.” The basic idea of these proposals is to make it easier for users of these platforms to migrate between platforms or to use the features of different platforms through data portability and interoperability. Specific proposals have taken various forms over the years, but generally they would require firms like Facebook to either make users’ data exportable in a standardized form so that users could easily migrate it to other platforms or to adopt a standardized API that would allow other platforms to interoperate with data stored on the Facebook platform.
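
To make these proposals concrete, the sketch below imagines what a standardized export format and endpoint might look like. The schema and endpoint are hypothetical assumptions, not any actual or proposed standard; the key feature to notice is that a useful social-graph export necessarily bundles friends’ information with the exporting user’s own.

```typescript
// Hypothetical shape of a standardized social-graph export. The field names
// and endpoint are illustrative assumptions, not any platform's real schema.
interface ExportedFriend {
  userId: string;
  displayName: string;
  sharedInterests: string[]; // data about people who never dealt with the third party
}

interface SocialGraphExport {
  userId: string;
  profile: { displayName: string; hometown?: string };
  friends: ExportedFriend[];
  posts: { createdAt: string; text: string; likedBy: string[] }[];
}

// A single bearer token pulls the full graph, so every third party granted
// access becomes another place from which the data can later leak.
async function fetchExport(apiBase: string, token: string): Promise<SocialGraphExport> {
  const res = await fetch(`${apiBase}/v1/social-graph/export`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) {
    throw new Error(`export failed with status ${res.status}`);
  }
  return (await res.json()) as SocialGraphExport;
}
```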

In other words, proposals to “open the social graph” are proposals to make it easier to export massive volumes of Facebook user data to third parties at efficient scale.

If there is one lesson from the past decade more trenchant than the fact that delineating privacy rights is difficult, it is that data security is even harder.

These last two points do not sum together well. The easier that Facebook makes it for its users’ data to be exported at scale, the easier Facebook makes it for its users’ data to be exfiltrated at scale. Despite its myriad problems, Cambridge Analytica at least was operating within a contractual framework with Facebook – it was a known party. Creating an external API for exporting Facebook data makes it easier for unknown third parties to anonymously obtain user information. Indeed, even if the API only works to allow trusted third parties to obtain such information, the problem of keeping that data secured against subsequent exfiltration multiplies with each third party that is allowed access to that data.

According to Cory Doctorow over at Boing Boing, Tim Wu has written an open letter to W3C Director Sir Timothy Berners-Lee, expressing concern about a proposal to include Encrypted Media Extensions (EME) as part of the W3C standards. W3C has a helpful description of EME:

Encrypted Media Extensions (EME) is currently a draft specification… [for] an Application Programming Interface (API) that enables Web applications to interact with content protection systems to allow playback of encrypted audio and video on the Web. The EME specification enables communication between Web browsers and digital rights management (DRM) agent software to allow HTML5 video play back of DRM-wrapped content such as streaming video services without third-party media plugins. This specification does not create nor impose a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.

Wu’s letter expresses his concern about hardwiring DRM into the technical standards supporting an open internet. He writes:

I wanted to write to you and respectfully ask you to seriously consider extending a protective covenant to legitimate circumventers who have cause to bypass EME, should it emerge as a W3C standard.

Wu asserts that this “protective covenant” is needed because, without it, EME will confer too much power on internet “chokepoints”:

The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected…. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know… It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.”

But conflating the Microsoft case with a relatively simple browser feature meant to enable all content providers to use any third-party DRM to secure their content — in other words, to enhance interoperability — is beyond the pale. If we take the Microsoft case as Wu would like, it was about one firm controlling, far and away, the largest share of desktop computing installations, a position that Wu and his fellow travelers believed gave Microsoft an unreasonable leg up in forcing usage of Internet Explorer to the exclusion of Netscape. With EME, the W3C is not maneuvering the standard so that a single DRM provider comes to protect all content on the web, or could even hope to do so. EME enables content distributors to stream content through browsers using their own DRM backend. There is simply nothing in that standard that enables a firm to dominate content distribution or control huge swaths of the Internet to the exclusion of competitors.

Unless, of course, you just don’t like DRM and you think that any technology that enables content producers to impose restrictions on consumption of media creates a “chokepoint.” But, again, this position is borderline nonsense. Such a “chokepoint” is no more restrictive than just going to Netflix’s app (or Hulu’s, or HBO’s, or Xfinity’s, or…) and relying on its technology. And while it is no more onerous than visiting Netflix’s app, it creates greater security on the open web such that copyright owners don’t need to resort to proprietary technologies and apps for distribution. And, more fundamentally, Wu’s position ignores the role that access and usage controls are playing in creating online markets through diversified product offerings.

Wu appears to believe, or would have his readers believe, that W3C is considering the adoption of a mandatory standard that would modify core aspects of the network architecture, and that therefore presents novel challenges to the operation of the internet. But this is wrong in two key respects:

  1. Except in the extremely limited manner as described below by the W3C, the EME extension does not contain mandates, and is designed only to simplify the user experience in accessing content that would otherwise require plug-ins; and
  2. These extensions are already incorporated into the major browsers. And of course, most importantly for present purposes, the standard in no way defines or harmonizes the use of DRM.

The W3C has clearly and succinctly explained the operation of the proposed extension:

The W3C is not creating DRM policies and it is not requiring that HTML use DRM. Organizations choose whether or not to have DRM on their content. The EME API can facilitate communication between browsers and DRM providers but the only mandate is not DRM but a form of key encryption (Clear Key). EME allows a method of playback of encrypted content on the Web but W3C does not make the DRM technology nor require it. EME is an extension. It is not required for HTML nor HTML5 video.
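
For readers curious how narrow that surface actually is, here is a rough sketch of the EME flow using the baseline Clear Key system mentioned above. The key id and key values are placeholders; a streaming service would choose its own key system and exchange these messages with its own license server instead.

```typescript
// Rough sketch of the EME flow with the W3C's baseline Clear Key system.
// Placeholder key material only; real services swap in their preferred key
// system and a license server inside the 'message' handler.
async function playEncrypted(video: HTMLVideoElement): Promise<void> {
  const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', [
    {
      initDataTypes: ['keyids', 'cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    },
  ]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  video.addEventListener('encrypted', async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();

    // For Clear Key, the "license" is just a JSON Web Key set supplied by the
    // page; proprietary DRM systems exchange opaque messages with their own
    // license servers at this step.
    session.addEventListener('message', () => {
      const license = new TextEncoder().encode(
        JSON.stringify({
          keys: [{ kty: 'oct', kid: 'PLACEHOLDER_KEY_ID', k: 'PLACEHOLDER_KEY' }],
          type: 'temporary',
        }),
      );
      void session.update(license);
    });

    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```

Nothing in that exchange dictates which DRM a distributor must use or how keys are licensed; the browser merely brokers messages between the page and whatever content decryption module is present.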

Like many internet commentators, Tim Wu fundamentally doesn’t like DRM, and his position here would appear to reflect his aversion to DRM rather than a response to the specific issues before the W3C. Interestingly, in arguing against DRM nearly a decade ago, Wu wrote:

Finally, a successful locking strategy also requires intense cooperation between many actors – if you protect a song with “superlock,” and my CD player doesn’t understand that, you’ve just created a dead product. (Emphasis added)

In other words, he understood the need for agreements in vertical distribution chains in order to properly implement protection schemes — integration that he opposes here (not to suggest that he supported them then, but only to highlight the disconnect between recognizing the need for coordination and simultaneously trying to prevent it).

Vint Cerf (himself no great fan of DRM — see here, for example) has offered a number of thoughtful responses to those, like Wu, who have objected to the proposed standard. Cerf writes on the ISOC listserv:

EME is plainly very general. It can be used to limit access to virtually any digital content, regardless of IPR status. But, in some sense, anyone wishing to restrict access to some service/content is free to do so (there are other means such as login access control, end/end encryption such as TLS or IPSEC or QUIC). EME is yet another method for doing that. Just because some content is public domain does not mean that every use of it must be unprotected, does it?

And later in the thread he writes:

Just because something is public domain does not mean someone can’t lock it up. Presumably there will be other sources that are not locked. I can lock up my copy of Gulliver’s Travels and deny you access except by some payment, but if it is public domain someone else may have a copy you can get. In any case, you can’t deny others the use of the content IF THEY HAVE IT. You don’t have to share your copy of public domain with anyone if you don’t want to.

Just so. It’s pretty hard to see the competition problems that could arise from facilitating more content providers making content available on the open web.

In short, Wu wants the W3C to develop limitations on rules when there are no relevant rules to modify. His dislike of DRM obscures his vision of the limited nature of the EME proposal which would largely track, rather than lead, the actions already being undertaken by the principal commercial actors on the internet, and which merely creates a structure for facilitating voluntary commercial transactions in ways that enhance the user experience.

The W3C process will not, as Wu intimates, introduce some pernicious, default protection system that would inadvertently lock down content; rather, it would encourage the development of digital markets on the open net rather than (or in addition to) through the proprietary, vertical markets where they are increasingly found today. Wu obscures reality rather than illuminating it through his poorly considered suggestion that EME will somehow lead to a new set of defaults that threaten core freedoms.

Finally, we can’t help but comment on Wu’s observation that

My larger point is that I think the history of the anti-circumvention laws suggests is (sic) hard to predict how [freedom would be affected]– no one quite predicted the inkjet market would be affected. But given the power of those laws, the potential for anti-competitive consequences certainly exists.

Let’s put aside the fact that W3C is not debating the laws surrounding circumvention, nor, as noted, developing usage rules. It remains troubling that Wu’s belief that there are sometimes unintended consequences of actions (and therefore a potential for harm) would be sufficient to lead him to oppose a change to the status quo — as if any future, potential risk necessarily outweighs present, known harms. This is the Precautionary Principle on steroids. The EME proposal grew out of a desire to address impediments that prevent the viability and growth of online markets that sufficiently ameliorate the non-hypothetical harms of unauthorized uses. The EME proposal is a modest step towards addressing a known universe. A small step, but something to celebrate, not bemoan.

Microsoft wants you to believe that Google’s business practices stifle competition and harm consumers. Again.

The latest volley in its tiresome and ironic campaign to bludgeon Google with the same regulatory club once used against Microsoft itself is the company’s effort to foment an Android-related antitrust case in Europe.

In a recent polemic, Microsoft consultant (and business school professor) Ben Edelman denounces Google for requiring that, if device manufacturers want to pre-install key Google apps on Android devices, they “must install all the apps Google specifies, with the prominence Google requires, including setting these apps as defaults where Google instructs.” Edelman trots out gasp-worthy “secret” licensing agreements that he claims support his allegation (more on this later).

Similarly, a recent Wall Street Journal article, “Android’s ‘Open’ System Has Limits,” cites Edelman’s claim that limits on the licensing of Google’s proprietary apps mean that the Android operating system isn’t truly open source and comes with “strings attached.”

In fact, along with the Microsoft-funded trade organization FairSearch, Edelman has gone so far as to charge that this “tying” constitutes an antitrust violation. It is this claim that Microsoft and a network of proxies brought to the Commission when their efforts to manufacture a search-neutrality-based competition case against Google failed.

But before getting too caught up in the latest round of anti-Google hysteria, it’s worth noting that the Federal Trade Commission has already reviewed these claims. After a thorough, two-year inquiry, the FTC found the antitrust arguments against Google to be without merit. The South Korea Fair Trade Commission conducted its own two-year investigation into Google’s Android business practices and dismissed the claims before it as meritless, as well.

Taking on Edelman and FairSearch with an exhaustive scholarly analysis, German law professor Torsten Koerber recently assessed the nature of competition among mobile operating systems and concluded that:

(T)he (EU) Fairsearch complaint ultimately does not aim to protect competition or consumers, as it pretends to. It rather strives to shelter Microsoft from competition by abusing competition law to attack Google’s business model and subvert competition.

It’s time to take a step back and consider the real issues at play.

In order to argue that Google has an iron grip on Android, Edelman’s analysis relies heavily on “secret” Google licensing agreements — “MADAs” (Mobile Application Distribution Agreements) — trotted out with such fanfare one might think it was the first time two companies ever had a written contract (or tried to keep it confidential).

For Edelman, these agreements “suppress competition” with “no plausible pro-consumer benefits.” He writes, “I see no way to reconcile the MADA restrictions with [Android openness].”

Conveniently, however, Edelman neglects to cite to Section 2.6 of the MADA:

The parties will create an open environment for the Devices by making all Android Products and Android Application Programming Interfaces available and open on the Devices and will take no action to limit or restrict the Android platform.

Professor Koerber’s analysis provides a straightforward explanation of the relationship between Android and its OEM licensees:

Google offers Android to OEMs on a royalty-free basis. The licensees are free to download, distribute and even modify the Android code as they like. OEMs can create mobile devices that run “pure” Android…or they can apply their own user interfaces (UI) and thereby hide most of the underlying Android system (e.g. Samsung’s “TouchWiz” or HTC’s “Sense”). OEMs make ample use of this option.

The truth is that the Android operating system remains, as ever, definitively open source — but Android’s openness isn’t really what the fuss is about. In this case, the confusion (or obfuscation) stems from the casual confounding of Google Apps with the Android Operating System. As we’ll see, they aren’t the same thing.

Consider Amazon, which pre-loads no Google applications at all on its Kindle Fire and Fire Phone. Amazon’s version of Android uses Microsoft’s Bing as the default search engine, Nokia provides mapping services, and the app store is Amazon’s own.

Still, Microsoft’s apologists continue to claim that Android licensees can’t choose to opt out of Google’s applications suite — even though, according to a new report from ABI Research, 20 percent of smartphones shipped between May and July 2014 were based on a “Google-less” version of the Android OS. And that number is consistently increasing: Analysts predict that by 2015, 30 percent of Android phones won’t access Google Services.

It’s true that equipment manufacturers who choose the Android operating system have the option to include the suite of integrated, proprietary Google apps and services licensed (royalty-free) under the name Google Mobile Services (GMS). GMS includes Google Search, Maps, Calendar, YouTube and other apps that together define the “Google Android experience” that users know and love.

But Google Android is far from the only Android experience.

Even if a manufacturer chooses to license Google’s apps suite, Google’s terms are not exclusive. Handset makers are free to install competing applications, including other search engines, map applications or app stores.

Although Google requires that Google Search be made easily accessible (hardly a bad thing for consumers, as it is Google Search that finances the development and maintenance of all of the other (free) apps from which Google otherwise earns little to no revenue), OEMs and users alike can (and do) easily install and access other search engines in numerous ways. As Professor Koerber notes:

The standard MADA does not entail any exclusivity for Google Search nor does it mandate a search default for the web browser.

Regardless, integrating key Google apps (like Google Search and YouTube) with other apps the company offers (like Gmail and Google+) is an antitrust problem only if it significantly forecloses competitors from these apps’ markets compared to a world without integrated Google apps, and without pro-competitive justification. Neither is true, despite the unsubstantiated claims to the contrary from Edelman, FairSearch and others.

Consumers and developers expect and demand consistency across devices so they know what they’re getting and don’t have to re-learn basic functions or program multiple versions of the same application. Indeed, Apple’s devices are popular in part because Apple’s closed iOS provides a predictable, seamless experience for users and developers.

But making Android competitive with its tightly controlled competitors requires special efforts from Google to maintain a uniform and consistent experience for users. Google has tried to achieve this uniformity by increasingly disentangling its apps from the operating system (the opposite of tying) and giving OEMs the option (but not the requirement) of licensing GMS — a “suite” of technically integrated Google applications (integrated with each other, not the OS).  Devices with these proprietary apps thus ensure that both consumers and developers know what they’re getting.

Unlike Android, Apple prohibits modifications of its operating system by downstream partners and users, and completely controls the pre-installation of apps on iOS devices. It deeply integrates applications into iOS, including Apple Maps, iTunes, Siri, Safari, its App Store and others. Microsoft has copied Apple’s model to a large degree, hard-coding its own applications (including Bing, Windows Store, Skype, Internet Explorer, Bing Maps and Office) into the Windows Phone operating system.

In the service of creating and maintaining a competitive platform, each of these closed OS’s bakes into its operating system significant limitations on which third-party apps can be installed and what they can (and can’t) do. For example, neither platform permits installation of a third-party app store, and neither can be significantly customized. Apple’s iOS also prohibits users from changing default applications — although the soon-to-be released iOS 8 appears to be somewhat more flexible than previous versions.

In addition to pre-installing a raft of their own apps and limiting installation of other apps, both Apple and Microsoft enable greater functionality for their own apps than they do for the third-party apps they allow.

For example, Apple doesn’t make available for other browsers (like Google’s Chrome) all the JavaScript functionality that it does for Safari, and it requires other browsers to use iOS Webkit instead of their own web engines. As a result there are things that Chrome can’t do on iOS that Safari and only Safari can do, and Chrome itself is hamstrung in implementing its own software on iOS. This approach has led Mozilla to refuse to offer its popular Firefox browser for iOS devices (while it has no such reluctance about offering it on Android).

On Windows Phone, meanwhile, Bing is integrated into the OS and can’t be removed. Only in markets where Bing is not supported (and with Microsoft’s prior approval) can OEMs change the default search app from Bing. While it was once possible to change the default search engine that opens in Internet Explorer (although never from the hardware search button), the Windows 8.1 Hardware Development Notes, updated July 22, 2014, state:

By default, the only search provider included on the phone is Bing. The search provider used in the browser is always the same as the one launched by the hardware search button.

Both Apple iOS and Windows Phone tightly control the ability to use non-default apps to open intents sent from other apps and, in Windows especially, often these linkages can’t be changed.

As a result of these sorts of policies, maintaining the integrity — and thus the brand — of the platform is (relatively) easy for closed systems. While plenty of browsers are perfectly capable of answering an intent to open a web page, Windows Phone can better ensure a consistent and reliable experience by forcing Internet Explorer to handle the operation.

By comparison, Android, with or without Google Mobile Services, is dramatically more open, more flexible and customizable, and more amenable to third-party competition. Even the APIs that it uses to integrate its apps are open to all developers, ensuring that there is nothing that Google apps are able to do that non-Google apps with the same functionality are prevented from doing.

In other words, not just Gmail, but any email app is permitted to handle requests from any other app to send emails; not just Google Calendar but any calendar app is permitted to handle requests from any other app to accept invitations.

In no small part because of this openness and flexibility, current reports indicate that Android OS runs 85 percent of mobile devices worldwide. But it is OEM giant Samsung, not Google, that dominates the market, with a 65 percent share of all Android devices. Competition is rife, however, especially in emerging markets. In fact, according to one report, “Chinese and Indian vendors accounted for the majority of smartphone shipments for the first time with a 51% share” in 2Q 2014.

As he has not been in the past, Edelman is at least nominally circumspect in his unsubstantiated legal conclusions about Android’s anticompetitive effect:

Applicable antitrust law can be complicated: Some ties yield useful efficiencies, and not all ties reduce welfare.

Given Edelman’s connections to Microsoft and the realities of the market he is discussing, it could hardly be otherwise. If every integration were an antitrust violation, every element of every operating system — including Apple’s iOS as well as every variant of Microsoft’s Windows — should arguably be the subject of a government investigation.

In truth, Google has done nothing more than ensure that its own suite of apps functions on top of Android to maintain what Google sees as seamless interconnectivity, a high-quality experience for users, and consistency for application developers — while still allowing handset manufacturers room to innovate in a way that is impossible on other platforms. This is the very definition of pro-competitive, and ultimately this is what allows the platform as a whole to compete against its far more vertically integrated alternatives.

Which brings us back to Microsoft. On the conclusion of the FTC investigation in January 2013, a GigaOm exposé on the case had this to say:

Critics who say Google is too powerful have nagged the government for years to regulate the company’s search listings. But today the critics came up dry….

The biggest loser is Microsoft, which funded a long-running cloak-and-dagger lobbying campaign to convince the public and government that its arch-enemy had to be regulated….

The FTC is also a loser because it ran a high profile two-year investigation but came up dry.

EU regulators, take note.

This post is from Phil Weiser (Colorado)

It is trite to say that “we are all Schumpeterians now.”  When it comes to appreciating the importance of innovation and entrepreneurship, however, we are.  Schumpeter, unfortunately, did not leave a theory of innovation that lends itself to easy application to public policy prescriptions, as Brad De Long has explained so clearly.  By so clearly highlighting the role that antitrust law and intellectual property policy can play in spurring innovation, Michael Carrier has done the field a great service.  Indeed, Mike has written an impressive, ambitious, and important book.  But in a post like this, I come not to praise him, but to take pot shots from the peanut gallery.

My first pot shot is one that Mike knows is coming—and footnote 143 reveals as much.  On Mike’s view, the Trinko/Credit Suisse double header is, at worst, benign and, at best, on the money.  This view of Trinko leads Mike to predict that the Supreme Court will take an aggressive posture as to “pay for delay” pharmaceutical settlements that have developed as an unintended consequence of the Hatch-Waxman Act.  The problem with this view is that Mike overlooks the most disturbing aspect of Trinko—it made the judgment about the effectiveness of the regulatory regime (in that case as to the FCC) on a motion to dismiss.  Notably, in the AT&T antitrust litigation, this issue was a question of fact and not presumed based on the mere presence of a regulatory regime.  I have the same concern about Credit Suisse, which took a generous view of the SEC’s regulatory effectiveness not long after that agency (and the self regulatory organization upon which it relied) failed to unearth a cartel arrangement at the NASDAQ that was only revealed through antitrust litigation.  But I have written about this before, as footnote 143 recounts.

My second point is to underscore a point Mike makes in regard to the Microsoft antitrust litigation—whether the presence of intellectual property rights (IPRs) should justify a firm’s decision to withhold access to application programming interfaces or protocols necessary to facilitate interoperability.  I agree with his conclusion that IPRs should not displace antitrust oversight.  Again, to invoke U.S. v. AT&T, consider that, had the relevant interconnection issue in that case involved patented interfaces, it would have come out differently under the theory pressed by Microsoft.  Given that software patents are controversial to begin with, awarding the recipient of a patent on an application programming interface or communications protocol a get-out-of-jail-free card is hard to justify.  That said, I would have liked to see Mike develop his view of the Microsoft case.  He may well have resisted doing so out of concerns related to space, a lack of historical distance, or that he was not sure what type of verdict to pronounce on the decree.  At a minimum, I believe it safe to say that the case underscores the challenges of “regulating interoperability,” of which the IPR issues are only a relatively small part of the overall equation.

For a final point, let me close on the discussion of standard setting organizations (SSOs).  The role of SSOs is potentially very important and, until recently, they operated with a limited degree of awareness of the regulatory challenges they face as to, among other issues, the threat of patent holdout.  As I have explained elsewhere, there is a strong argument that SSOs should be given the type of latitude that Mike calls for in facilitating cooperation and managing the behavior of individual firms.  Where Mike could drill down deeper, however, is to evaluate the institutional challenges of how to enforce commitments by firms participating in standard setting organizations to restrict their collection of royalties to reasonable and non-discriminatory (RAND) terms.  Most question-begging is whether the FTC’s Section 5 authority will ultimately prove to be an important tool in this regard (as used in the N-Data case).  In the wake of the Supreme Court’s denial of certiorari in the Rambus case, there will undoubtedly be more pressure for the FTC to use this tool.

Mike’s book provides lots of fodder for discussion and will provide policymakers with a rich set of proposals to evaluate.  I look forward to hearing his voice on these issues over the years ahead.