Archives For data security

And if David finds out the data beneath his profile, you’ll start to be able to connect the dots in various ways with Facebook and Cambridge Analytica and Trump and Brexit and all these loosely-connected entities. Because you get to see inside the beast, you get to see inside the system.

This excerpt from the beginning of Netflix’s The Great Hack shows the goal of the documentary: to provide one easy explanation for Brexit and the election of Trump, two of the most surprising electoral outcomes in recent history.

Unfortunately, in attempting to tell a simple narrative, the documentary obscures more than it reveals about what actually happened in the Facebook-Cambridge Analytica data scandal. In the process, the film wildly overstates the significance of the scandal in either the 2016 US presidential election or the 2016 UK referendum on leaving the EU.

In this article, I will review the background of the case and show seven things the documentary gets wrong about the Facebook-Cambridge Analytica data scandal.

Background

In 2013, researchers published a paper showing that you could predict some personality traits — openness and extraversion — from an individual’s Facebook Likes. Cambridge Analytica wanted to use Facebook data to create a “psychographic” profile — i.e., personality type — of each voter and then micro-target them with political messages tailored to their personality type, ultimately with the hope of persuading them to vote for Cambridge Analytica’s client (or at least to not vote for the opposing candidate).

In this case, the psychographic profile is the person’s Big Five (or OCEAN) personality traits, which research has shown are relatively stable throughout our lives:

  1. Openness to new experiences
  2. Conscientiousness
  3. Extraversion
  4. Agreeableness
  5. Neuroticism

But how to get the Facebook data to create these profiles? A researcher at Cambridge University, Alex Kogan, created an app called thisismydigitallife, a short quiz for determining your personality type. Between 250,000 and 270,000 people were paid a small amount of money to take this quiz. 

Those who took the quiz shared some of their own Facebook data as well as their friends’ data (so long as the friends’ privacy settings allowed third-party app developers to access their data). 

This process captured data on “at least 30 million identifiable U.S. consumers”, according to the FTC. For context, even if we assume all 30 million were registered voters, that means the data could be used to create profiles for less than 20 percent of the relevant population. And though some may disagree with Facebook’s policy for sharing user data with third-party developers, collecting data in this manner was in compliance with Facebook’s terms of service at the time.

What crossed the line was what happened next. Kogan then sold that data to Cambridge Analytica, without the consent of the affected Facebook users and in express violation of Facebook’s prohibition on third-party developers selling user data on to other parties.

Upon learning of the sale, Facebook directed Alex Kogan and Cambridge Analytica to delete the data. But the social media company failed to notify users that their data had been misused or confirm via an independent audit that the data was actually deleted.

1. Cambridge Analytica was selling snake oil (no, you are not easily manipulated)

There’s a line in The Great Hack that sums up the opinion of the filmmakers and the subjects in their story: “There’s 2.1 billion people, each with their own reality. And once everybody has their own reality, it’s relatively easy to manipulate them.” According to the latest research from political science, this is completely bogus (and it’s the same marketing puffery that Cambridge Analytica would pitch to prospective clients).

The best evidence in this area comes from Joshua Kalla and David E. Broockman in a 2018 study published in the American Political Science Review:

We argue that the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero. First, a systematic meta-analysis of 49 field experiments estimates an average effect of zero in general elections. Second, we present nine original field experiments that increase the statistical evidence in the literature about the persuasive effects of personal contact 10-fold. These experiments’ average effect is also zero.

In other words, a meta-analysis covering 49 high-quality field experiments found that in US general elections, campaign contact and advertising have an average persuasive effect of zero. (However, there is evidence “campaigns are able to have meaningful persuasive effects in primary and ballot measure campaigns, when partisan cues are not present.”)

But the relevant conclusion for the Cambridge Analytica scandal remains the same: in highly visible elections with a polarized electorate, it simply isn’t that easy to persuade voters to change their minds.

2. Micro-targeting political messages is overrated — people prefer general messages on shared beliefs

But maybe Cambridge Analytica’s micro-targeting strategy would result in above-average effects? The literature provides reason for skepticism here as well. Another paper by Eitan D. Hersh and Brian F. Schaffner in The Journal of Politics found that voters “rarely prefer targeted pandering to general messages” and “seem to prefer being solicited based on broad principles and collective beliefs.” It’s political tribalism all the way down. 

A field experiment with 56,000 Wisconsin voters in the 2008 US presidential election found that “persuasive appeals possibly reduced candidate support and almost certainly did not increase it,” suggesting that “contact by a political campaign can engender a backlash.”

3. Big Five personality traits are not very useful for predicting political orientation

Or maybe there’s something special about targeting political messages based on a person’s Big Five personality traits? Again, there is little reason to believe this is the case. As Kris-Stella Trump notes in an article for The Washington Post:

The ‘Big 5’ personality traits … only predict about 5 percent of the variation in individuals’ political orientations. Even accurate personality data would only add very little useful information to a data set that includes people’s partisanship — which is what most campaigns already work with.

The best evidence we have on the influence of personality traits on decision-making comes from the marketing literature (n.b., it’s likely easier to influence consumer decisions than political decisions in today’s increasingly polarized electorate). Here too the evidence is weak:

In one successful study, researchers targeted ads based on personality to more than 1.5 million people; the result was only about 100 more purchases of beauty products than if they had advertised without targeting.

More to the point, the Facebook data obtained by Cambridge Analytica couldn’t even accomplish the simple task of matching Facebook Likes to the Big Five personality traits. Here’s Cambridge University researcher Alex Kogan in Michael Lewis’s podcast episode about the scandal: 

We started asking the question of like, well, how often are we right? And so there’s five personality dimensions? And we said like, okay, for what percentage of people do we get all five personality categories correct? We found it was like 1%.

Eitan Hersh, an associate professor of political science at Tufts University, summed it up best: “Every claim about psychographics etc made by or about [Cambridge Analytica] is BS.”

4. If Cambridge Analytica’s “weapons-grade communications techniques” were so powerful, then Ted Cruz would be president

The Great Hack:

Ted Cruz went from the lowest rated candidate in the primaries to being the last man standing before Trump got the nomination… Everyone said Ted Cruz had this amazing ground game, and now we know who came up with all of it. Joining me now, Alexander Nix, CEO of Cambridge Analytica, the company behind it all.

Reporting by Nicholas Confessore and Danny Hakim at The New York Times directly contradicts this framing on Cambridge Analytica’s role in the 2016 Republican presidential primary:

Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates.

Most significantly, the Cruz campaign stopped using Cambridge Analytica’s services in February 2016 due to disappointing results, as Kenneth P. Vogel and Darren Samuelsohn reported in Politico in June of that year:

Cruz’s data operation, which was seen as the class of the GOP primary field, was disappointed in Cambridge Analytica’s services and stopped using them before the Nevada GOP caucuses in late February, according to a former staffer for the Texas Republican.

“There’s this idea that there’s a magic sauce of personality targeting that can overcome any issue, and the fact is that’s just not the case,” said the former staffer, adding that Cambridge “doesn’t have a level of understanding or experience that allows them to target American voters.”

Vogel later tweeted that most firms hired Cambridge Analytica “because it was seen as a prerequisite for receiving $$$ from the MERCERS.” So it seems campaigns hired Cambridge Analytica not for its “weapons-grade communications techniques” but for the firm’s connections to billionaire Robert Mercer.

5. The Trump campaign phased out Cambridge Analytica data in favor of RNC data for the general election

Just as the Cruz campaign became disillusioned after working with Cambridge Analytica during the primary, so too did the Trump campaign during the general election, as Major Garrett reported for CBS News:

The crucial decision was made in late September or early October when Mr. Trump’s son-in-law Jared Kushner and Brad Parscale, Mr. Trump’s digital guru on the 2016 campaign, decided to utilize just the RNC data for the general election and used nothing from that point from Cambridge Analytica or any other data vendor. The Trump campaign had tested the RNC data, and it proved to be vastly more accurate than Cambridge Analytica’s, and when it was clear the RNC would be a willing partner, Mr. Trump’s campaign was able to rely solely on the RNC.

And of the little work Cambridge Analytica did complete for the Trump campaign, none involved “psychographics,” The New York Times reported:

Mr. Bannon at one point agreed to expand the company’s role, according to the aides, authorizing Cambridge to oversee a $5 million purchase of television ads. But after some of them appeared on cable channels in Washington, D.C. — hardly an election battleground — Cambridge’s involvement in television targeting ended.

Trump aides … said Cambridge had played a relatively modest role, providing personnel who worked alongside other analytics vendors on some early digital advertising and using conventional micro-targeting techniques. Later in the campaign, Cambridge also helped set up Mr. Trump’s polling operation and build turnout models used to guide the candidate’s spending and travel schedule. None of those efforts involved psychographics.

6. There is no evidence that Facebook data was used in the Brexit referendum

Last year, the UK’s data protection authority fined Facebook £500,000 — the maximum penalty allowed under the law — for violations related to the Cambridge Analytica data scandal. The fine was astonishing considering that the investigation of Cambridge Analytica’s licensed data derived from Facebook “found no evidence that UK citizens were among them,” according to the BBC. This detail demolishes the second central claim of The Great Hack, that data fraudulently acquired from Facebook users enabled Cambridge Analytica to manipulate the British people into voting for Brexit. On this basis, Facebook is currently appealing the fine.

7. The Great Hack wasn’t a “hack” at all

The title of the film is an odd choice given the facts of the case, as detailed in the background section of this article. A “hack” is generally understood as an unauthorized breach of a computer system or network by a malicious actor. People think of a genius black hat programmer who overcomes a company’s cybersecurity defenses to profit off stolen data. Alex Kogan, the Cambridge University researcher who acquired the Facebook data for Cambridge Analytica, was nothing of the sort. 

As Gus Hurwitz noted in an article last year, Kogan entered into a contract with Facebook and asked users for their permission to acquire their data by using the thisismydigitallife personality app. Arguably, if there was a breach of trust, it was when the app users chose to share their friends’ data, too. The editorial choice to call this a “hack” instead of “data collection” or “data scraping” is of a piece with the rest of the film; when given a choice between accuracy and sensationalism, the directors generally chose the latter.

Why does this narrative persist despite the facts of the case?

The takeaway from the documentary is that Cambridge Analytica hacked Facebook and subsequently undermined two democratic processes: the Brexit referendum and the 2016 US presidential election. The reason this narrative has stuck in the public consciousness is that it serves everyone’s self-interest (except, of course, Facebook’s).

It lets voters off the hook for what seem, to many, to be drastic mistakes (i.e., electing a reality TV star president and undoing the European project). If we were all manipulated into making the “wrong” decision, then the consequences can’t be our fault! 

This narrative also serves Cambridge Analytica, to a point. For a time, the political consultant liked being able to tell prospective clients that it was the mastermind behind two stunning political upsets. Lastly, journalists like the story because they compete with Facebook in the advertising market and view the tech giant as an existential threat.

There is no evidence for the film’s implicit assumption that, but for Cambridge Analytica’s use of Facebook data to target voters, Trump wouldn’t have been elected and the UK wouldn’t have voted to leave the EU. Despite its tone and ominous presentation style, The Great Hack fails to muster any support for its extreme claims. The truth is much more mundane: the Facebook-Cambridge Analytica data scandal was neither a “hack” nor was it “great” in historical importance.

The documentary ends with a question:

But the hardest part in all of this is that these wreckage sites and crippling divisions begin with the manipulation of one individual. Then another. And another. So, I can’t help but ask myself: Can I be manipulated? Can you?

No — but the directors of The Great Hack tried their best to do so.

Last year, real estate developer Alastair Mactaggart spent nearly $3.5 million to put a privacy law on the ballot in California’s November election. He then negotiated a deal with state lawmakers to withdraw the ballot initiative if they passed their own privacy bill. That law — the California Consumer Privacy Act (CCPA) — was enacted after only seven days of drafting and amending. CCPA will go into effect six months from today.

According to Mactaggart, it all began when he spoke with a Google engineer and was shocked to learn how much personal data the company collected. This revelation motivated him to find out exactly how much of his data Google had. Perplexingly, instead of using Google’s freely available transparency tools, Mactaggart decided to spend millions to pressure the state legislature into passing new privacy regulation.

The law establishes six consumer rights: the right to know; the right of data portability; the right to deletion; the right to opt out of data sales; the right not to be discriminated against as a user; and a private right of action for data breaches.

So, what are the law’s prospects when it goes into effect next year? Here are ten reasons why CCPA is going to be a dumpster fire.

1. CCPA compliance costs will be astronomical

“TrustArc commissioned a survey of the readiness of 250 firms serving California from a range of industries and company size in February 2019. It reports that 71 percent of the respondents expect to spend at least six figures in CCPA-related privacy compliance expenses in 2019 — and 19 percent expect to spend over $1 million. Notably, if CCPA were in effect today, 86 percent of firms would not be ready. An estimated half a million firms are liable under the CCPA, most of which are small- to medium-sized businesses. If all eligible firms paid only $100,000, the upfront cost would already be $50 billion. This is in addition to lost advertising revenue, which could total as much as $60 billion annually.” (AEI / Roslyn Layton)
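The $50 billion figure in that estimate is simple arithmetic; a minimal sketch using the numbers quoted above:

```python
# Back-of-the-envelope CCPA compliance cost, using the figures
# quoted from the AEI / Roslyn Layton analysis above.
FIRMS_LIABLE = 500_000       # estimated firms subject to CCPA
COST_PER_FIRM = 100_000      # lower-bound compliance spend per firm

upfront_cost = FIRMS_LIABLE * COST_PER_FIRM
print(f"${upfront_cost:,}")  # $50,000,000,000
```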

2. CCPA will be good for Facebook and Google (and bad for small ad networks)

“It’s as if the privacy activists labored to manufacture a fearsome cannon with which to subdue giants like Facebook and Google, loaded it with a scattershot set of legal restrictions, aimed it at the entire ads ecosystem, and fired it with much commotion. When the smoke cleared, the astonished activists found they’d hit only their small opponents, leaving the giants unharmed. Meanwhile, a grinning Facebook stared back at the activists and their mighty cannon, the weapon that they had slyly helped to design.” (Wired / Antonio García Martínez)

“Facebook and Google ultimately are not constrained as much by regulation as by users. The first-party relationship with users that allows these companies relative freedom under privacy laws comes with the burden of keeping those users engaged and returning to the app, despite privacy concerns.” (Wired / Antonio García Martínez)

3. CCPA will enable free-riding by users who opt out of data sharing

“[B]y restricting companies from limiting services or increasing prices for consumers who opt-out of sharing personal data, CCPA enables free riders—individuals that opt out but still expect the same services and price—and undercuts access to free content and services. Someone must pay for free services, and if individuals opt out of their end of the bargain—by allowing companies to use their data—they make others pay more, either directly or indirectly with lower quality services. CCPA tries to compensate for the drastic reduction in the effectiveness of online advertising, an important source of income for digital media companies, by forcing businesses to offer services even though they cannot effectively generate revenue from users.” (ITIF / Daniel Castro and Alan McQuinn)

4. CCPA is potentially unconstitutional as-written

“[T]he law potentially applies to any business throughout the globe that has/gets personal information about California residents the moment the business takes the first dollar from a California resident. Furthermore, the law applies to some corporate affiliates (parent, subsidiary, or commonly owned companies) of California businesses, even if those affiliates have no other ties to California. The law’s purported application to businesses not physically located in California raises potentially significant dormant Commerce Clause and other Constitutional problems.” (Eric Goldman)

5. GDPR compliance programs cannot be recycled for CCPA

“[C]ompanies cannot just expand the coverage of their EU GDPR compliance measures to residents of California. For example, the California Consumer Privacy Act:

  • Prescribes disclosures, communication channels (including toll-free phone numbers) and other concrete measures that are not required to comply with the EU GDPR.
  • Contains a broader definition of “personal data” and also covers information pertaining to households and devices.
  • Establishes broad rights for California residents to direct deletion of data, with differing exceptions than those available under GDPR.
  • Establishes broad rights to access personal data without certain exceptions available under GDPR (e.g., disclosures that would implicate the privacy interests of third parties).
  • Imposes more rigid restrictions on data sharing for commercial purposes.”

(IAPP / Lothar Determann)

6. CCPA will be a burden on small- and medium-sized businesses

“The law applies to businesses operating in California if they generate an annual gross revenue of $25 million or more, if they annually receive or share personal information of 50,000 California residents or more, or if they derive at least 50 percent of their annual revenue by “selling the personal information” of California residents. In effect, this means that businesses with websites that receive traffic from an average of 137 unique Californian IP addresses per day could be subject to the new rules.” (ITIF / Daniel Castro and Alan McQuinn)
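The 137-address figure in that quote is just the 50,000-consumer threshold spread over a year; a quick check:

```python
# The CCPA's 50,000-consumer annual threshold, expressed as a
# daily average of unique visitors.
THRESHOLD_PER_YEAR = 50_000  # California residents per year

daily_average = THRESHOLD_PER_YEAR / 365
print(round(daily_average))  # 137
```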

CCPA “will apply to more than 500,000 U.S. companies, the vast majority of which are small- to medium-sized enterprises.” (IAPP / Rita Heimes and Sam Pfeifle)

7. CCPA’s definition of “personal information” is extremely over-inclusive

“CCPA likely includes gender information in the “personal information” definition because it is “capable of being associated with” a particular consumer when combined with other datasets. We can extend this logic to pretty much every type or class of data, all of which become re-identifiable when combined with enough other datasets. Thus, all data related to individuals (consumers or employees) in a business’ possession probably qualifies as “personal information.” (Eric Goldman)

“The definition of “personal information” includes “household” information, which is particularly problematic. A “household” includes the consumer and other co-habitants, which means that a person’s “personal information” oxymoronically includes information about other people. These people’s interests may diverge, such as with separating spouses, multiple generations under the same roof, and roommates. Thus, giving a consumer rights to access, delete, or port “household” information affects other people’s information, which may violate their expectations and create major security and privacy risks.” (Eric Goldman)

8. CCPA penalties might become a source for revenue generation

“According to the new Cal. Civ. Code §1798.150, companies that become victims of data theft or other data security breaches can be ordered in civil class action lawsuits to pay statutory damages between $100 to $750 per California resident and incident, or actual damages, whichever is greater, and any other relief a court deems proper, subject to an option of the California Attorney General’s Office to prosecute the company instead of allowing civil suits to be brought against it.” (IAPP / Lothar Determann)

“According to the new Cal. Civ. Code §1798.155, companies can be ordered in a civil action brought by the California Attorney General’s Office to pay penalties of up to $7,500 per intentional violation of any provision of the California Consumer Privacy Act, or, for unintentional violations, if the company fails to cure the unintentional violation within 30 days of notice, $2,500 per violation under Section 17206 of the California Business and Professions Code. Twenty percent of such penalties collected by the State of California shall be allocated to a new “Consumer Privacy Fund” to fund enforcement.” (IAPP / Lothar Determann)

“[T]he Attorney General, through its support of SB 561, is seeking to remove this provision, known as a “30-day cure,” arguing that it would be able to secure more civil penalties and thus increase enforcement. Specifically, the Attorney General has said it needs to raise $57.5 million in civil penalties to cover the cost of CCPA enforcement.”  (ITIF / Daniel Castro and Alan McQuinn)
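To see why the penalty provisions look like a revenue source, it helps to run the statutory numbers. The per-unit amounts below come from §1798.150 and §1798.155 as quoted above; the breach size and violation count are hypothetical illustrations, not figures from any real case:

```python
# Hypothetical exposure under CCPA's penalty provisions.
# Per-unit amounts are from Cal. Civ. Code §1798.150 / §1798.155;
# the breach size and violation count are made-up for illustration.
residents_affected = 100_000  # hypothetical breach size

# §1798.150: statutory damages of $100 to $750 per resident per incident
damages_low = residents_affected * 100
damages_high = residents_affected * 750
print(f"${damages_low:,} to ${damages_high:,}")  # $10,000,000 to $75,000,000

# §1798.155: up to $7,500 per intentional violation, with 20 percent
# of collected penalties allocated to the Consumer Privacy Fund
intentional_violations = 1_000  # hypothetical count
penalty = intentional_violations * 7_500
fund_share = int(penalty * 0.2)
print(f"${penalty:,} (${fund_share:,} to the Fund)")  # $7,500,000 ($1,500,000 to the Fund)
```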

9. CCPA is inconsistent with existing privacy laws

“California has led the United States and often the world in codifying privacy protections, enacting the first laws requiring notification of data security breaches (2002) and website privacy policies (2004). In the operative section of the new law, however, the California Consumer Privacy Act’s drafters did not address any overlap or inconsistencies between the new law and any of California’s existing privacy laws, perhaps due to the rushed legislative process, perhaps due to limitations on the ability to negotiate with the proponents of the Initiative. Instead, the new Cal. Civ. Code §1798.175 prescribes that in case of any conflicts with California laws, the law that affords the greatest privacy protections shall control.” (IAPP / Lothar Determann)

10. CCPA will need to be amended, creating uncertainty for businesses

As of now, a dozen bills amending CCPA have passed the California Assembly and continue to wind their way through the legislative process. California lawmakers have until September 13th to make any final changes to the law before it goes into effect. In the meantime, businesses have to begin compliance preparations under a cloud of uncertainty about what the law says today — or what it might say in the future.


GDPR is officially one year old. How have the first 12 months gone? As you can see from the mix of data and anecdotes below, it appears that compliance costs have been astronomical; individual “data rights” have led to unintended consequences; “privacy protection” seems to have undermined market competition; and there have been large unseen — but not unmeasurable! — costs in forgone startup investment. So, all in all, about what we expected.

GDPR cases and fines

Here is the latest data on cases and fines released by the European Data Protection Board:

  • €55,955,871 in fines
    • €50 million of which was a single fine on Google
  • 281,088 total cases
    • 144,376 complaints
    • 89,271 data breach notifications
    • 47,441 other
  • 37.0% ongoing
  • 62.9% closed
  • 0.1% appealed
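The EDPB figures above are internally consistent; a quick sanity check on the totals, and on how much of the fine total one Google penalty represents:

```python
# Sanity-check the EDPB figures quoted above.
complaints = 144_376
breach_notifications = 89_271
other = 47_441

total_cases = complaints + breach_notifications + other
print(total_cases)  # 281088 -- matches the reported total

# The single €50 million Google fine dominates the fine total.
google_fine = 50_000_000
total_fines = 55_955_871
print(f"{google_fine / total_fines:.0%}")  # 89%
```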

Unintended consequences of new data privacy rights

GDPR can be thought of as a privacy “bill of rights.” Many of these new rights have come with unintended consequences. If your account gets hacked, the hacker can use the right of access to get all of your data. The right to be forgotten is in conflict with the public’s right to know a bad actor’s history (and many of them are using the right to memory hole their misdeeds). The right to data portability creates another attack vector for hackers to exploit. And the right to opt-out of data collection creates a free-rider problem where users who opt-in subsidize the privacy of those who opt-out.

Article 15: Right of access

  • “Amazon sent 1,700 Alexa voice recordings to the wrong user following data request” [The Verge / Nick Statt]
  • “Today I discovered an unfortunate consequence of GDPR: once someone hacks into your account, they can request—and potentially access—all of your data. Whoever hacked into my Spotify account got all of my streaming, song, etc. history simply by requesting it.” [Jean Yang]

Article 17: Right to be forgotten

  • “Since 2016, newspapers in Belgium and Italy have removed articles from their archives under [GDPR]. Google was also ordered last year to stop listing some search results, including information from 2014 about a Dutch doctor who The Guardian reported was suspended for poor care of a patient.” [NYT / Adam Satariano]
  • “French scam artist Michael Francois Bujaldon is using the GDPR to attempt to remove traces of his United States District Court case from the internet. He has already succeeded in compelling PacerMonitor to remove his case.” [PlainSite]
  • “In the last 5 days, we’ve had requests under GDPR to delete three separate articles … all about US lawsuits concerning scams committed by Europeans. That ‘right to be forgotten’ is working out just great, huh guys?” [Mike Masnick]

Article 20: Right to data portability

  • Data portability increases the attack surface for bad actors to exploit. In a sense, the Cambridge Analytica scandal was a case of too much data portability.
  • “The problem with data portability is that it goes both ways: if you can take your data out of Facebook to other applications, you can do the same thing in the other direction. The question, then, is which entity is likely to have the greater center of gravity with regards to data: Facebook, with its social network, or practically anything else?” [Stratechery / Ben Thompson]
  • “Presumably data portability would be imposed on Facebook’s competitors and potential competitors as well.  That would mean all future competing firms would have to slot their products into a Facebook-compatible template.  Let’s say that 17 years from now someone has a virtual reality social network innovation: does it have to be “exportable” into Facebook and other competitors?  It’s hard to think of any better way to stifle innovation.” [Marginal Revolution / Tyler Cowen]

Article 21: Right to opt out of data processing

  • “[B]y restricting companies from limiting services or increasing prices for consumers who opt-out of sharing personal data, these frameworks enable free riders—individuals that opt out but still expect the same services and price—and undercut access to free content and services.” [ITIF / Alan McQuinn and Daniel Castro]

Compliance costs are astronomical

  • Prior to GDPR going into effect, “PwC surveyed 200 companies with more than 500 employees and found that 68% planned on spending between $1 and $10 million to meet the regulation’s requirements. Another 9% planned to spend more than $10 million. With over 19,000 U.S. firms of this size, total GDPR compliance costs for this group could reach $150 billion.” [Fortune / Daniel Castro and Michael McLaughlin]
  • “[T]he International Association of Privacy Professionals (IAPP) estimates 500,000 European organizations have registered data protection officers (DPOs) within the first year of the General Data Protection Regulation (GDPR). According to a recent IAPP salary survey, the average DPO’s salary in Europe is $88,000.” [IAPP]
  • As of March 20, 2019, 1,129 US news sites are still unavailable in the EU due to GDPR. [Joseph O’Connor]
  • Microsoft had 1,600 engineers working on GDPR compliance. [Microsoft]
  • During a Senate hearing, Keith Enright, Google’s chief privacy officer, estimated that the company spent “hundreds of years of human time” to comply with the new privacy rules. [Quartz / Ashley Rodriguez]
    • However, French authorities ultimately decided Google’s compliance efforts were insufficient: “France fines Google nearly $57 million for first major violation of new European privacy regime” [Washington Post / Tony Romm]
  • “About 220,000 name tags will be removed in Vienna by the end of [2018], the city’s housing authority said. Officials fear that they could otherwise be fined up to $23 million, or about $1,150 per name.” [Washington Post / Rick Noack]
    UPDATE: Wolfie Christl pointed out on Twitter that the order to remove name tags was rescinded after only 11,000 name tags were removed due to public backlash and what Housing Councilor Kathrin Gaal said were “different legal opinions on the subject.”

Tradeoff between privacy regulations and market competition

“On the big guys increasing market share? I don’t believe [the law] will have such a consequence.” Věra Jourová, the European Commissioner for Justice, Consumers and Gender Equality [WSJ / Sam Schechner and Nick Kostov]

“Mentioned GDPR to the head of a European media company. ‘Gift to Google and Facebook, enormous regulatory own-goal.'” [Benedict Evans]

  • “Hundreds of companies compete to place ads on webpages or collect data on their users, led by Google, Facebook and their subsidiaries. The European Union’s General Data Protection Regulation, which took effect in May, imposes stiff requirements on such firms and the websites who use them. After the rule took effect in May, Google’s tracking software appeared on slightly more websites, Facebook’s on 7% fewer, while the smallest companies suffered a 32% drop, according to Ghostery, which develops privacy-enhancing web technology.” [WSJ / Greg Ip]
  • Havas SA, one of the world’s largest buyers of ads, says it observed a low double-digit percentage increase in advertisers’ spending through DBM (DoubleClick Bid Manager, Google’s ad-buying tool) on Google’s own ad exchange on the first day the law went into effect, according to Hossein Houssaini, Havas’s global head of programmatic solutions. On the selling side, companies that help publishers sell ad inventory have seen declines in bids coming through their platforms from Google. Paris-based Smart says it has seen a roughly 50% drop. [WSJ / Nick Kostov and Sam Schechner]
  • “The consequence was that just hours after the law’s enforcement, numerous independent ad exchanges and other vendors watched their ad demand volumes drop between 20 and 40 percent. But with agencies free to still buy demand on Google’s marketplace, demand on AdX spiked. The fact that Google’s compliance strategy has ended up hurting its competitors and redirecting higher demand back to its own marketplace, where it can guarantee it has user consent, has unsettled publishers and ad tech vendors.” [Digiday / Jessica Davies]

Unseen costs of forgone investment & research

  • Startups: One study estimated that venture capital invested in EU startups fell by as much as 50 percent due to GDPR implementation: “Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR … We use our results to provide a back-of-the-envelope calculation of a range of job losses that may be incurred by these ventures, which we estimate to be between 3,604 to 29,819 jobs.” [NBER / Jian Jia, Ginger Zhe Jin, and Liad Wagman]
  • Mergers and acquisitions: “55% of respondents said they had worked on deals that fell apart because of concerns about a target company’s data protection policies and compliance with GDPR” [WSJ / Nina Trentmann]
  • Scientific research: “[B]iomedical researchers fear that the EU’s new General Data Protection Regulation (GDPR) will make it harder to share information across borders or outside their original research context.” [Politico / Sarah Wheaton]

GDPR graveyard

Small and medium-sized businesses (SMBs) have left the EU market in droves (or shut down entirely). Here is a partial list:

Blockchain & P2P Services

  • CoinTouch, peer-to-peer cryptocurrency exchange
  • FamilyTreeDNA, free and public genetic tools
    • Mitosearch
    • Ysearch
  • Monal, XMPP chat app
  • Parity, know-your-customer service for initial coin offerings (ICOs)
  • Seznam, social network for students
  • StreetLend, tool sharing platform for neighbors

Marketing

  • Drawbridge, cross-device identity service
  • Klout, social reputation service by Lithium
  • Unroll.me, inbox management app
  • Verve, mobile programmatic advertising

Video Games

Other

The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke of the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from it being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to being about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more-coherent and far-more-readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.”  This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseam), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’ authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) reaches conduct that is merely “likely to cause” harm.
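On the first reading, the statutory language operates as a three-prong conjunctive test. The following sketch is purely illustrative (the type and function names are my own; only the quoted prongs come from the statute):

```python
# Illustrative model of one reading of Section 5(n) -- not legal advice.
# All names here are invented for exposition.
from dataclasses import dataclass

@dataclass
class Practice:
    causes_substantial_injury: bool   # "causes or is likely to cause substantial injury"
    reasonably_avoidable: bool        # avoidable "by consumers themselves"
    benefits_outweigh_injury: bool    # "countervailing benefits to consumers or to competition"

def is_unfair_under_5n(p: Practice) -> bool:
    """True only if all three statutory prongs are satisfied."""
    return (
        p.causes_substantial_injury
        and not p.reasonably_avoidable
        and not p.benefits_outweigh_injury
    )

# A practice that injures consumers, is unavoidable, and has no offsetting
# benefits satisfies the test; flipping any one prong defeats the claim.
print(is_unfair_under_5n(Practice(True, False, False)))  # True
print(is_unfair_under_5n(Practice(True, True, False)))   # False
```

On the alternative “necessary condition” reading, a True result would merely permit, rather than establish, an unfairness claim.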

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like something driven by public policy, and not much like the mere enforcement of existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case does little to explain how to make that distinction. The opinion is more than a bit muddled and difficult to interpret clearly. Nonetheless, reading the court’s dicta in Part II is instructive. It’s clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, those well-established legal standards can be changed — through adjudication. It is difficult to square these two propositions. Regardless, this is the guidance the court has given us.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms through either rulemaking or adjudication is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, those rules can hardly guide private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.  

The principles set out in Chenery II, of course, do not answer the data-security law question whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court sidestepping this question, and asking whether the FTC sufficiently defined what it was doing in the first place.  

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates the vagueness that the Court in Chenery II rejected, and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally-binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard than that faced in district court, and undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action  is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

The FTC will hold an “Informational Injury Workshop” in December “to examine consumer injury in the context of privacy and data security.” Defining the scope of cognizable harm that may result from the unauthorized use or third-party hacking of consumer information is, to be sure, a crucial inquiry, particularly as ever-more information is stored digitally. But the Commission — rightly — is aiming at more than mere definition. As it notes, the ultimate objective of the workshop is to address questions like:

How do businesses evaluate the benefits, costs, and risks of collecting and using information in light of potential injuries? How do they make tradeoffs? How do they assess the risks of different kinds of data breach? What market and legal incentives do they face, and how do these incentives affect their decisions?

How do consumers perceive and evaluate the benefits, costs, and risks of sharing information in light of potential injuries? What obstacles do they face in conducting such an evaluation? How do they evaluate tradeoffs?

Understanding how businesses and consumers assess the risk and cost “when information about [consumers] is misused,” and how they conform their conduct to that risk, entails understanding not only the scope of the potential harm, but also the extent to which conduct affects the risk of harm. This, in turn, requires an understanding of the FTC’s approach to evaluating liability under Section 5 of the FTC Act.

The problem, as we discuss in comments submitted by the International Center for Law & Economics to the FTC for the workshop, is that the Commission’s current approach troublingly mixes the required separate analyses of risk and harm, with little elucidation of either.

The core of the problem arises from the Commission’s reliance on what it calls a “reasonableness” standard for its evaluation of data security. By its nature, a standard that assigns liability for only unreasonable conduct should incorporate concepts resembling those of a common law negligence analysis — e.g., establishing a standard of due care, determining causation, evaluating the costs of and benefits of conduct that would mitigate the risk of harm, etc. Unfortunately, the Commission’s approach to reasonableness diverges from the rigor of a negligence analysis. In fact, as it has developed, it operates more like a strict liability regime in which largely inscrutable prosecutorial discretion determines which conduct, which firms, and which outcomes will give rise to liability.
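The negligence-style balancing described above is often summarized by the Learned Hand formula: a failure to take a precaution is negligent when the burden of the precaution (B) is less than the probability of harm (P) times the magnitude of the loss (L). A minimal sketch, with numbers invented purely for illustration:

```python
def negligent_under_hand_formula(burden: float, p_harm: float, loss: float) -> bool:
    """Learned Hand formula: omitting a precaution is negligent
    when the burden of the precaution (B) is less than the
    probability of harm (P) times the magnitude of the loss (L)."""
    return burden < p_harm * loss

# Hypothetical data-security example: suppose encrypting a customer
# database costs $50,000, a breach has a 2% annual probability, and a
# breach would impose $10 million in consumer losses.
B, P, L = 50_000, 0.02, 10_000_000
print(negligent_under_hand_formula(B, P, L))  # True: $50,000 < $200,000 expected harm
```

A “reasonableness” inquiry with this structure would require the Commission to put numbers, or at least defensible rankings, on B, P, and L; that is precisely the rigor the comments argue is currently missing.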

Most troublingly, coupled with the Commission’s untenably lax (read: virtually nonexistent) evidentiary standards, the extremely liberal notion of causation embodied in its “reasonableness” approach means that the mere storage of personal information, even absent any data breach, could amount to an unfair practice under the Act — clearly not a “reasonable” result.

The notion that a breach itself can constitute injury will, we hope, be taken up during the workshop. But even if injury is limited to a particular type of breach — say, one in which sensitive, personal information is exposed to a wide swath of people — unless the Commission’s definition of what it means for conduct to be “likely to cause” harm is fixed, it will virtually always be the case that storage of personal information could conceivably lead to the kind of breach that constitutes injury. In other words, better defining the scope of injury does little to cabin the scope of the agency’s discretion when conduct creating any risk of that injury is actionable.

Our comments elaborate on these issues and provide our thoughts on how the subjective nature of informational injuries can fit into Section 5, with a particular focus on the problem of assessing informational injury in an evolving social context, and the need to appropriately assess benefits in any cost-benefit analysis of conduct leading to informational injury.

ICLE’s full comments are available here.

The comments draw upon our article, When ‘Reasonable’ Isn’t: The FTC’s Standard-Less Data Security Standard, forthcoming in the Journal of Law, Economics and Policy.

I’ll be participating in two excellent antitrust/consumer protection events next week in DC, both of which may be of interest to our readers:

5th Annual Public Policy Conference on the Law & Economics of Privacy and Data Security

hosted by the GMU Law & Economics Center’s Program on Economics & Privacy, in partnership with the Future of Privacy Forum, and the Journal of Law, Economics & Policy.

Conference Description:

Data flows are central to an increasingly large share of the economy. A wide array of products and business models—from the sharing economy and artificial intelligence to autonomous vehicles and embedded medical devices—rely on personal data. Consequently, privacy regulation leaves a large economic footprint. As with any regulatory enterprise, the key to sound data policy is striking a balance between competing interests and norms that leaves consumers better off; finding an approach that addresses privacy concerns, but also supports the benefits of technology is an increasingly complex challenge. Not only is technology continuously advancing, but individual attitudes, expectations, and participation vary greatly. New ideas and approaches to privacy must be identified and developed at the same pace and with the same focus as the technologies they address.

This year’s symposium will include panels on Unfairness under Section 5: Unpacking “Substantial Injury”, Conceptualizing the Benefits and Costs from Data Flows, and The Law and Economics of Data Security.

I will be presenting a draft paper, co-authored with Kristian Stout, on the FTC’s reasonableness standard in data security cases following the Commission decision in LabMD, entitled, When “Reasonable” Isn’t: The FTC’s Standard-less Data Security Standard.

Conference Details:

  • Thursday, June 8, 2017
  • 8:00 am to 3:40 pm
  • at George Mason University, Founders Hall (next door to the Law School)
    • 3351 Fairfax Drive, Arlington, VA 22201

Register here

View the full agenda here

 

The State of Antitrust Enforcement

hosted by the Federalist Society.

Panel Description:

Antitrust policy during much of the Obama Administration was a continuation of the Bush Administration’s minimal involvement in the market. However, at the end of President Obama’s term, there was a significant pivot to investigations and blocks of high profile mergers such as Halliburton-Baker Hughes, Comcast-Time Warner Cable, Staples-Office Depot, Sysco-US Foods, and Aetna-Humana and Anthem-Cigna. How will or should the new Administration analyze proposed mergers, including certain high profile deals like Walgreens-Rite Aid, AT&T-Time Warner, Inc., and DraftKings-FanDuel?

Join us for a lively luncheon panel discussion that will cover these topics and the anticipated future of antitrust enforcement.

Speakers:

  • Albert A. Foer, Founder and Senior Fellow, American Antitrust Institute
  • Professor Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Honorable Joshua D. Wright, Professor of Law, George Mason University School of Law
  • Moderator: Honorable Ronald A. Cass, Dean Emeritus, Boston University School of Law and President, Cass & Associates, PC

Panel Details:

  • Friday, June 09, 2017
  • 12:00 pm to 2:00 pm
  • at the National Press Club, MWL Conference Rooms
    • 529 14th Street, NW, Washington, DC 20045

Register here

Hope to see everyone at both events!

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out an adequate case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research that demonstrates that ISPs, thanks to increasing encryption, do not have access to any better quality data, and probably less quality data, than edge providers themselves have.

But this is a curious bit of reasoning. It essentially amounts to the idea that, not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places, for example. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

Last week the International Center for Law & Economics filed comments on the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As we note in our comments:

The Commission’s NPRM would shoehorn the business models of a subset of new economy firms into a regime modeled on thirty-year-old CPNI rules designed to address fundamentally different concerns about a fundamentally different market. The Commission’s hurried and poorly supported NPRM demonstrates little understanding of the data markets it proposes to regulate and the position of ISPs within that market. And, what’s more, the resulting proposed rules diverge from analogous rules the Commission purports to emulate. Without mounting a convincing case for treating ISPs differently than the other data firms with which they do or could compete, the rules contemplate disparate regulatory treatment that would likely harm competition and innovation without evident corresponding benefit to consumers.

In particular, we focus on the FCC’s failure to justify treating ISPs differently than other competitors, and its failure to justify more stringent treatment for ISPs in general:

In short, the Commission has not made a convincing case that discrimination between ISPs and edge providers makes sense for the industry or for consumer welfare. The overwhelming body of evidence upon which other regulators have relied in addressing privacy concerns urges against a hard opt-in approach. That same evidence and analysis supports a consistent regulatory approach for all competitors, and nowhere advocates for a differential approach for ISPs when they are participating in the broader informatics and advertising markets.

With respect to the proposed opt-in regime, the NPRM ignores the weight of economic evidence on opt-in rules and fails to justify the specific rules it prescribes. Of most significance is the imposition of this opt-in requirement for the sharing of non-sensitive data.

On net opt-in regimes may tend to favor the status quo, and to maintain or grow the position of a few dominant firms. Opt-in imposes additional costs on consumers and hurts competition — and it may not offer any additional protections over opt-out. In the absence of any meaningful evidence or rigorous economic analysis to the contrary, the Commission should eschew imposing such a potentially harmful regime on broadband and data markets.

Finally, we explain that, although the NPRM purports to embrace a regulatory regime consistent with the current “federal privacy regime,” and particularly the FTC’s approach to privacy regulation, it actually does no such thing — a sentiment echoed by a host of current and former FTC staff and commissioners, including the Bureau of Consumer Protection staff, Commissioner Maureen Ohlhausen, former Chairman Jon Leibowitz, former Commissioner Josh Wright, and former BCP Director Howard Beales.

Our full comments are available here.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As Former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress was struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it’s implemented it in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullin’s SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the Court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

On August 24, the Third Circuit issued its much anticipated decision in FTC v. Wyndham Worldwide Corp., holding that the U.S. Federal Trade Commission (FTC) has authority to challenge cybersecurity practices under its statutory “unfairness” authority.  This case brings into focus both legal questions regarding the scope of the FTC’s cybersecurity authority and policy questions regarding the manner in which that authority should be exercised.

1.     Wyndham: An Overview

Rather than “reinventing the wheel,” let me begin by quoting at length from Gus Hurwitz’s excellent summary of the relevant considerations in this case:

In 2012, the FTC sued Wyndham Worldwide, the parent company and franchisor of the Wyndham brand of hotels, arguing that its allegedly lax data security practices allowed hackers to repeatedly break into its franchisees’ computer systems. The FTC argued that these breaches resulted in harm to consumers totaling over $10 million in fraudulent activity. The FTC brought its case under Section 5 of the FTC Act, which declares “unfair and deceptive acts and practices” to be illegal. The FTC’s basic arguments are that it was, first, deceptive for Wyndham – which had a privacy policy indicating how it handled customer data – to assure consumers that the company took industry-standard security measures to protect customer data; and second, independent of any affirmative assurances that customer data was safe, it was unfair for Wyndham to handle customer data in an insecure way.

This case arose in the broader context of the FTC’s efforts to establish a general law of data security. Over the past two decades, the FTC has begun aggressively pursuing data security claims against companies that suffer data breaches. Almost all of these cases have settled out of court, subject to consent agreements with the FTC. The Commission points to these agreements, along with other public documents that it views as guidance, as creating a “common law of data security.” Responding to a request from the Third Circuit for supplemental briefing on this question, the FTC asserted in no uncertain terms its view that “the FTC has acted under its procedures to establish that unreasonable data security practices that harm consumers are indeed unfair within the meaning of Section 5.”

Shortly after the FTC’s case was filed, Wyndham asked the District Court judge to dismiss the case, arguing that the FTC didn’t have authority under Section 5 to take action against a firm that had suffered a criminal theft of its data. The judge denied this motion. But, recognizing the importance and uncertainty of part of the issue – the scope of the FTC’s “unfairness” authority – she allowed Wyndham to immediately appeal that part of her decision. The Third Circuit agreed to hear the appeal, framing the question as whether the FTC has authority to regulate cybersecurity under its Section 5 “unfairness” authority, and, if so, whether the FTC’s application of that authority satisfied Constitutional Due Process requirements. Oral arguments were heard last March, and the court’s opinion was issued on Monday [August 24]. . . .

In its opinion, the Court of Appeals rejects Wyndham’s arguments that its data security practices cannot be unfair. As such, the case will be allowed to proceed to determine whether Wyndham’s security practices were in fact “unfair” under Section 5. . . .

Recall the setting in which this case arose: the FTC has spent more than a decade trying to create a general law of data security. The reason this case was – and still is – important is because Wyndham was challenging the FTC’s general law of data security.

But the court, in the second part of its opinion, accepts Wyndham’s arguments that the FTC has not developed such a law. This is central to the court’s opinion, because different standards apply to interpretations of laws that courts have developed as opposed to those that agencies have developed. The court outlines these standards, explaining that “a higher standard of fair notice applies [in the context of agency rules] than in the typical civil statutory interpretation case because agencies engage in interpretation differently than courts.”

The court goes on to find that Wyndham had sufficient notice of the requirements of Section 5 under the standard that applies to judicial interpretations of statutes. And it expressly notes that, should the district court decide that the higher standard applies – that is, if the court agrees to apply the general law of data security that the FTC has tried to develop in recent years – the court will need to reevaluate whether the FTC’s rules meet Constitutional muster. That review would be subject to the tougher standard applied to agency interpretations of statutes.

Stressing the Third Circuit’s statement that the FTC had failed to explain how it had “informed the public that it needs to look at [FTC] complaints and consent decrees for guidance[,]” Gus concludes that the Third Circuit’s opinion indicates that  the FTC “has lost its war to create a general law of data security” based merely on its prior actions.  According to Gus:

The takeaway, it seems, is that the FTC does have the power to take action against bad security practices, but if it wants to do so in a way that shapes industry norms and legal standards – if it wants to develop a general law of data security – a patchwork of consent decrees and informal statements is insufficient to the task. Rather, it must either pursue its cases to a decision on the merits or develop legally binding rules through . . . rulemaking procedures.

2.     Wyndham’s Implications for the Scope of the FTC’s Legal Authority

I highly respect Gus’s trenchant legal and policy analysis of Wyndham.  I believe, however, that it may somewhat understate the strength of the FTC’s legal position going forward.  The Third Circuit also explained (citations omitted):

Wyndham is only entitled to notice of the meaning of the statute and not to the agency’s interpretation of the statute. . . . 

[Furthermore,] Wyndham is entitled to a relatively low level of statutory notice for several reasons. Subsection 45(a) [of the FTC Act, which states “unfair acts or practices” are illegal] does not implicate any constitutional rights here. . . .  It is a civil rather than criminal statute. . . .  And statutes regulating economic activity receive a “less strict” test because their “subject matter is often more narrow, and because businesses, which face economic demands to plan behavior carefully, can be expected to consult relevant legislation in advance of action.” . . . .  In this context, the relevant legal rule is not “so vague as to be ‘no rule or standard at all.’” . . . .  Subsection 45(n) [of the FTC Act, as a prerequisite to a finding of unfairness,] asks whether “the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” While far from precise, this standard informs parties that the relevant inquiry here is a cost-benefit analysis, . . . that considers a number of relevant factors, including the probability and expected size of reasonably unavoidable harms to consumers given a certain level of cybersecurity and the costs to consumers that would arise from investment in stronger cybersecurity. We acknowledge there will be borderline cases where it is unclear if a particular company’s conduct falls below the requisite legal threshold. But under a due process analysis a company is not entitled to such precision as would eliminate all close calls. . . .  Fair notice is satisfied here as long as the company can reasonably foresee that a court could construe its conduct as falling within the meaning of the statute. . . . 

[In addition, in 2007, the FTC issued a guidebook on business data security, which] could certainly have helped Wyndham determine in advance that its conduct might not survive the [§ 45(n)] cost-benefit analysis.  Before the [cybersecurity] attacks [on Wyndham’s network], the FTC also filed complaints and entered into consent decrees in administrative cases raising unfairness claims based on inadequate corporate cybersecurity. . . .  That the FTC Commissioners – who must vote on whether to issue a complaint . . . – believe that alleged cybersecurity practices fail the cost-benefit analysis of § 45(n) certainly helps companies with similar practices apprehend the possibility that their cybersecurity could fail as well.  

In my view, a fair reading of this Third Circuit language is that:  (1) courts should read key provisions of the FTC Act to encompass cybersecurity practices that the FTC finds are not cost-beneficial; and (2) the FTC’s history of guidance and consent decrees regarding cybersecurity give sufficient notice to companies regarding the nature of cybersecurity plans that the FTC may challenge.   Based on that reading, I conclude that even if a court adopts a very exacting standard for reviewing the FTC’s interpretation of its own statute, the FTC is likely to succeed in future case-specific cybersecurity challenges, assuming that it builds a solid factual record that appears to meet cost-benefit analysis.  Whether other Circuits would agree with the Third Circuit’s analysis is, of course, open to debate (I myself suspect that they probably would).
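The § 45(n) balancing the court describes is, in substance, a Hand-formula-style calculus. As a stylized illustration (the notation is mine, not the court’s or the Commission’s), the inquiry can be sketched as:

```latex
% Stylized rendering of the Section 45(n) unfairness balancing (illustrative only):
% a practice at security level s is actionably "unfair" roughly when
\[
  p(s) \cdot H \;>\; C(s)
\]
% where
%   p(s) = probability of a breach given the firm's chosen security level s,
%   H    = expected consumer injury from a breach that consumers cannot
%          reasonably avoid on their own, and
%   C(s) = the costs (ultimately borne in part by consumers) of investing
%          in stronger security than level s.
```

This maps directly onto the factors the Third Circuit lists — “the probability and expected size of reasonably unavoidable harms to consumers given a certain level of cybersecurity and the costs to consumers that would arise from investment in stronger cybersecurity” — and underscores why borderline cases are inevitable: both sides of the inequality must be estimated, not merely asserted.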

3.     Sound Policy in Light of Wyndham

Apart from our slightly different “takes” on the legal implications of the Third Circuit’s Wyndham decision, I fully agree with Gus that, as a policy matter, the FTC’s “patchwork of consent decrees and informal statements is insufficient to the task” of building a general law of cybersecurity.  In a 2014 Heritage Foundation Legal Memorandum on the FTC and cybersecurity, I stated:

The FTC’s regulation of business systems by decree threatens to stifle innovation by companies related to data security and to impose costs that will be passed on in part to consumers. Missing from the consent decree calculus is the question of whether the benefits in diminished data security breaches justify those costs—a question that should be at the heart of unfairness analysis. There are no indications that the FTC has even asked this question in fashioning data security consents, let alone made case-specific cost-benefit analyses. This is troubling.

Equally troubling is that the FTC apparently expects businesses to divine from a large number of ad hoc, fact-specific consent decrees with varying provisions what they must do vis-à-vis data security to avoid possible FTC targeting. The uncertainty engendered by sole reliance on complicated consent decrees for guidance (in the absence of formal agency guidelines or litigated court decisions) imposes additional burdens on business planners. . . .

[D]ata security investigations that are not tailored to the size and capacity of the firm may impose competitive disadvantages on smaller rivals in industries in which data protection issues are paramount.

Moreover, it may be in the interest of very large firms to support costlier and more intrusive FTC data security initiatives, knowing that they can better afford the adoption of prohibitively costly data security protocols than their smaller competitors can. This is an example of a “raising rivals’ costs” strategy, which reduces competition by crippling or eliminating rivals.

Given these and related concerns (including the failure of existing FTC reports to give appropriate guidance), I concluded, among other recommendations, that:

[T]he FTC should issue data security guidelines that clarify its enforcement policy regarding data security breaches pursuant to Section 5 of the Federal Trade Commission Act. Such guidelines should be framed solely as limiting principles that tie the FTC’s hands to avoid enforcement excesses. They should studiously avoid dictating to industry the data security principles that firms should adopt. . . .

[T]he FTC should [also] employ a strict cost-benefit analysis before pursuing any new regulatory initiatives, legislative recommendations, or investigations related to other areas of data protection, such as data brokerage or the uses of big data.

In sum, the Third Circuit’s Wyndham decision, while interesting, in no way alters the fact that the FTC’s existing cybersecurity enforcement program is inadequate and unsound.  Whether through guidelines or formal FTC rules (which carry their own costs, including the risk of establishing inflexible standards that ignore future changes in business conditions and technology), the FTC should provide additional guidance to the private sector, rooted in sound cost-benefit analysis.  The FTC should also be ever mindful of the costs it imposes on the economy (including potential burdens on business innovation) whenever it considers bringing enforcement actions in this area.

4.     Conclusion

The debate over the appropriate scope of federal regulation of business cybersecurity programs will continue to rage, as serious data breaches receive public attention and the FTC considers new initiatives.  Let us hope that, as we move forward, federal regulators will fully take into account costs as well as benefits – including, in particular, the risk that federal overregulation will undermine innovation, harm businesses, and weaken the economy.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.

Continue Reading...