Archives For privacy

I have small children and, like any reasonably competent parent, I take an interest in monitoring their Internet usage. In particular, I am sensitive to what ad content they are being served and which sites they visit that might try to misuse their information. My son even uses Chromebooks at his elementary school, which underscores this concern for me, as I can’t always be present to watch what he does online. However, also like any other reasonably competent parent, I trust his school and his teacher to make good choices about what he is allowed to do online when I am not there to watch him. And so it is that I am both interested in and rather perplexed by what has EFF so worked up in its FTC complaint alleging privacy “violations” in the “Google for Education” program.

EFF alleges three “unfair or deceptive” acts that would subject Google to remedies under Section 5 of the FTCA: (1) students logged into “Google for Education” accounts have their non-educational behavior individually tracked (e.g., performing general web searches, browsing YouTube, etc.); (2) the Chromebooks distributed as part of the “Google for Education” program have the “Chrome Sync” feature turned on by default (ostensibly in a terribly diabolical effort to give students a seamless experience between using the Chromebooks at home and at school); and (3) the school administrators running particular instances of “Google for Education” have the ability to share student geolocation information with third-party websites. Each of these acts, claims EFF, violates the K-12 School Service Provider Pledge to Safeguard Student Privacy (“Pledge”) that was authored by the Future of Privacy Forum and the Software & Information Industry Association, and to which Google is a signatory. According to EFF, Google referenced its signature in its “Google for Education” marketing materials, thereby creating an expectation among parents that it would adhere to the Pledge’s principles, then failed to do so, and thus should be punished.

The TL;DR version: EFF appears to be making some simple interpretational errors — it believes that the scope of the Pledge covers any student activity and data generated while a student is logged into a Google account. As the rest of this post will (hopefully) make clear, however, the Pledge, though ambiguous, is more reasonably read as limiting Google’s obligations to instances where a student is using Google for Education apps, and does not apply to instances where the student is using non-Education apps — whether she is logged on using her Education account or not.

The key problem, as EFF sees it, is that Google “use[d] and share[d] … student personal information beyond what is needed for education.” So nice of them to settle complex business and educational decisions for the world! Who knew it was so easy to determine exactly what is needed for educational purposes!

Case in point: EFF feels that Google’s use of anonymous and aggregated student data in order to improve its education apps is not an educational purpose. Seriously? How can that not be useful for educational purposes — to improve its educational apps!?

And, according to EFF, the fact that Chrome Sync is ‘on’ by default in the Chromebooks only amplifies the harm caused by the non-Education data tracking because, when the students log in outside of school, their behavior can be correlated with their in-school behavior. Of course, this ignores the fact that the same limitations apply to the tracking — it happens only on non-Education apps. Thus, the Chrome Sync objection is somehow vaguely based on geography. The fact that Google can correlate an individual student’s viewing of a Neil DeGrasse Tyson video in a computer lab at school with her later finishing that video at home is somehow really bad (or so EFF claims).

EFF also takes issue with the fact that school administrators are allowed to turn on a setting enabling third parties to access the geolocation data of Google education apps users.

The complaint is fairly sparse on this issue — the claim is essentially limited to the assertion that “[s]haring a student’s physical location with third parties is unquestionably sharing personal information beyond what is needed for educational purposes[.]” While it’s possible that third parties could misuse student data, the presumption that any third-party access to geolocation data is per se outside of any educational use strikes me as unreasonable.

Geolocation data, particularly on mobile devices, could allow for any number of positive and negative uses, and without more it’s hard to take EFF’s premature concern all that seriously. Did EFF conduct a study demonstrating that geolocation data can serve no educational purpose or that the feature is frequently abused? Sadly, it seems doubtful. Instead, it appears to be relying upon the rather loose definition of likely harm that we have seen in FTC actions in other contexts (more on this problem here).

Who decides what ambiguous terms mean?

The bigger issue, however, is the ambiguity latent in the Pledge and how that ambiguity is being exploited to criticize Google. The complaint barely conceals EFF’s eagerness, and gives one the distinct feeling that the Pledge and this complaint are part of a long game. Everyone knows that Google’s entire existence revolves around the clever and innovative employment of large data sets. When Google announced that it was interested in working with schools to provide technology to students, I can only imagine how the anti-big-data-for-any-commercial-purpose crowd sat up and took notice, just waiting to pounce as soon as an opportunity, no matter how tenuous, presented itself.

EFF notes that “[u]nlike Microsoft and numerous other developers of digital curriculum and classroom management software, Google did not initially sign onto the Student Privacy Pledge with the first round of signatories when it was announced in the fall of 2014.” Apparently, it is an indictment of Google that it hesitated to adopt an external statement of privacy principles that was authored by a group that had no involvement with Google’s internal operations or business realities. EFF goes on to note that it was only after “sustained criticism” that Google “reluctantly” signed the pledge. So the company was badgered into signing a pledge it was reluctant to sign in the first place (almost certainly for exactly these sorts of reasons), and it is now being skewered by that pledge’s own proponents. Somehow I can’t help but get the sense that this FTC complaint was drafted even before Google signed the Pledge.

According to the Pledge, Google promised to:

  1. “Not collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student.”
  2. “Not build a personal profile of a student other than for supporting authorized educational/school purposes or as authorized by the parent/student.”
  3. “Not knowingly retain student personal information beyond the time period required to support the authorized educational/school purposes, or as authorized by the parent/student.”

EFF interprets “educational purpose” as anything a student does while logged into her education account, such that, by extension, even non-educational activity counts as “student personal information.” I think that a fair reading of the Pledge undermines this position, however: the correct interpretation is that “educational purpose” and “student personal information” are more tightly coupled, such that Google’s ability to collect student data is circumscribed only when the student is actually using the Google for Education apps.

So what counts as “student personal information” in the pledge? “Student personal information” is “personally identifiable information as well as other information when it is both collected and maintained on an individual level and is linked to personally identifiable information.”  Although this is fairly broad, it is limited by the definition of “Educational/School purposes” which are “services or functions that customarily take place at the direction of the educational institution/agency or their teacher/employee, for which the institutions or agency would otherwise use its own employees, and that aid in the administration or improvement of educational and school activities.” (emphasis added).

This limitation in the Pledge essentially sinks EFF’s complaint. A major part of EFF’s gripe is that when students interact with non-Education services, Google tracks them. However, the Pledge limits the collection of information only in contexts where “the institutions or agency would otherwise use its own employees” — a definition that clearly does not extend to general Internet usage. It would reasonably cover activities like administering classes, tests, and lessons; it would not cover activity such as general searches, watching videos on YouTube, and the like. Key to EFF’s error is that the Pledge operates not on accounts but on activity — in particular, educational activity “for which the institutions or agency would otherwise use its own employees.”

To interpret Google’s activity in the way that EFF does is to treat the Pledge as a promise never to do anything, ever, with the data of a student logged into an education account, whether generated as part of Education apps or otherwise. That just can’t be right. Thinking through the implications of EFF’s complaint, the ultimate end has to be that Google needs to obtain a permission slip from parents before offering access to Google for Education accounts. Administrators and Google are just not allowed to provision any services otherwise.

And here is where the long game comes in. EFF and its peers induced Google to sign the Pledge all the while understanding that their interpretation would necessarily require a re-write of Google’s business model.  But not only is this sneaky, it’s also ridiculous. By way of analogy, this would be similar to allowing parents an individual say over what textbooks or other curricular materials their children are allowed to access. This would either allow for a total veto by a single parent, or else would require certain students to be frozen out of participating in homework and other activities being performed with a Google for Education app. That may work for Yale students hiding from microaggressions, but it makes no sense to read such a contentious and questionable educational model into Google’s widely-offered apps.

I think a more reasonable interpretation should prevail. The privacy pledge is meant to govern the use of student data while that student is acting as a student — which in the case of Google for Education apps would mean while using said apps. Plenty of other Google apps could be used for educational purposes, but Google is intentionally delineating a sensible dividing line in order to avoid exactly this sort of problem (as well as problems that could arise under other laws directed at student activity, most notably COPPA). It is entirely unreasonable to presume that Google, by virtue of its socially desirable behavior of enabling students to have ready access to technology, is thereby prevented from tracking individuals’ behavior on non-Education apps as it chooses to define them.

What is the Harm?

According to EFF, there are two primary problems with Google’s gathering and use of student data: gathering and using individual data in non-Education apps, and gathering and using anonymized and aggregated data in the Education apps. So to what evil end does Google put the data gathered from non-Education apps?

“Google not only collects and stores the vast array of student data described above, but uses it for its own purposes such as improving Google products and serving targeted advertising (within non-Education Google services)”

The horrors! Google wants to use student behavior to improve its services! And yes, I get it, everyone hates ads — I hate ads too — but at some point you need to learn to accept that the wealth of nominally free apps available to every user is underwritten by the ad-sphere. So if Google is using the non-Education behavior of students to gain valuable insights that it can monetize and thereby subsidize its services, so what? This is life in the twenty-first century, and until everyone collectively decides that we prefer to pay for services up front, we had better get used to being tracked and monetized by advertisers.

But as noted above, whether you think Google should or shouldn’t be gathering this data, it seems clear that the data generated from use of non-Education apps doesn’t fall under the Pledge’s purview. Thus, perhaps sensing the problems in its non-Education use argument, EFF also half-heartedly attempts to demonize certain data practices that Google employs in the Education context. In short, Google aggregates and anonymizes the usage data of the Google for Education apps, and, according to EFF, this is a violation of the Pledge:

“Aggregating and anonymizing students’ browsing history does not change the intensely private nature of the data … such that Google should be free to use it[.]”

Again, the “harm” is that Google actually wants to improve the Education apps: “Google has acknowledged that it collects, maintains, and uses student information via Chrome Sync (in aggregated and anonymized form) for the purpose of improving Google products.”

This of course doesn’t violate the Pledge. After all, signatories to the Pledge promise only that they will “[n]ot collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes.” It’s eminently reasonable to include the improvement of the provisioned services as part of an “authorized educational … purpose[.]” And by ensuring that the data is anonymized and aggregated, Google is clearly acknowledging that some limits are appropriate in the education context — that it doesn’t need to collect individual and identifiable personal information for education purposes — but that improving its education products the same way it improves all its products is an educational purpose.

How are the harms enhanced by Chrome Sync? Honestly, it’s not really clear from EFF’s complaint. I believe that the core of EFF’s gripe (at least here) has to do with how the two data gathering activities may be correlated. Google has Chrome Sync enabled by default, so when students sign on from different locations, their Education apps usage is recorded and grouped (still anonymously) for service improvement alongside their non-Education use. And the presence of these two data sets, generated side by side, creates the potential to track students in their educational capacity by correlating that data with information generated in their non-educational capacity.

Maybe there are potential flaws in the manner in which the data is anonymized. Obviously EFF thinks anonymized data won’t stay anonymized. That is a contentious view, to say the least, but regardless, it is in no way compelled by the Pledge. But more to the point, merely having both data sets does not do anything that clearly violates the Pledge.

The End Game

So what do groups like EFF actually want? It’s important to consider the effects on social welfare that this approach to privacy takes, and its context. First, the Pledge was overwhelmingly designed for and signed by pure education companies, and not large organizations like Google, Apple, or Microsoft — thus the nature of the Pledge itself is more or less ill-fitted to a multi-faceted business model. If we follow the logical conclusions of this complaint, a company like Google would face an undesirable choice: On the one hand, it can provide hardware to schools at zero cost or heavily subsidized prices, and also provide a suite of useful educational applications. However, as part of this socially desirable donation, it must also place a virtual invisibility shield around students once they’ve signed into their accounts. From that point on, regardless of what service they use — even non-educational ones — Google is prevented from using any data students generate. At this point, one has to question Google’s incentive to remove huge swaths of the population from its ability to gather data. If Google did nothing but provide the hardware, it could simply leave its free services online as-is, and let schools adopt or not adopt them as they wish (subject of course to extant legislation such as COPPA) — thereby allowing itself to possibly collect even more data on the same students.

On the other hand, if not Google, then surely many other companies would think twice before wading into this quagmire, or, when they do, they might offer severely limited services. For instance, one way of complying with EFF’s view of how the Pledge works would be to shut off access to all non-Education services. So, students logged into an education account could only access the word processing and email services, but would be prevented from accessing YouTube, web search and other services — and consequently suffer from a limitation of potentially novel educational options.

EFF goes on to cite numerous FTC enforcement actions and settlements from recent years. But all of the cited examples have one thing in common that the current complaint does not: they all involve violations of § 5 based on explicit statements or representations made by a company to consumers. EFF’s complaint, on the other hand, is based on a particular interpretation of an ambiguous document, drafted in general terms and without reference to the complicated business practice at issue. What counts as “student information” when a user employs a general purpose machine for both educational and non-educational purposes? The Pledge — at least the sections that EFF relies upon in its complaint — is far from clear and doesn’t cover Google’s behavior in an obvious manner.

Of course, the whole complaint presumes that the nature of Google’s services was somehow unfair or deceptive to parents — thus implying that there was at least some material reliance on the Pledge in parental decision making. However, this misses a crucial detail: it is the school administrators who contract with Google for the Chromebooks and Google for Education services, and not the parents or the students.  Then again, maybe EFF doesn’t care and it is, as I suggest above, just interested in a long game whereby it can shoehorn Google’s services into some new sort of privacy regime. This isn’t all that unusual, as we have seen even the White House in other contexts willing to rewrite business practices wholly apart from the realities of privacy “harms.”

But in the end, this approach to privacy is just a very efficient way to discover the lowest common denominator in charity. If they even decide to brave the possible privacy suits, Google and other similarly situated companies will provide the barest access to the most limited services in order to avoid extensive liability under ambiguous pledges. And, perhaps even worse for overall social welfare, using the law to force compliance with voluntarily enacted, ambiguous codes of conduct is a sure-fire way to make sure that there are fewer and more limited codes of conduct in the future.

by Berin Szoka, President, TechFreedom

Josh Wright will doubtless be remembered for transforming how the FTC polices competition. Between finally defining Unfair Methods of Competition (UMC), and his twelve dissents and multiple speeches about competition matters, he re-grounded competition policy in the error-cost framework: weighing not only costs against benefits, but also the likelihood of getting it wrong against the likelihood of getting it right.

Yet Wright may be remembered as much for what he started as for what he finished: reforming the Commission’s Unfair and Deceptive Acts and Practices (UDAP) work. His consumer protection work is relatively slender: four dissents on high-tech matters plus four brief concurrences and one dissent on more traditional advertising substantiation cases. But together, these offer all the building blocks of an economic, error-cost-based approach to consumer protection. All that remains is for another FTC Commissioner to pick up where Wright left off.

Apple: Unfairness & Cost-Benefit Analysis

In January 2014, Wright issued a blistering, 17-page dissent from the Commission’s decision to bring, and settle, an enforcement action against Apple regarding the design of its app store. Wright dissented, not from the conclusion necessarily, but from the methodology by which the Commission arrived there. In essence, he argued for an error-cost approach to unfairness:

The Commission, under the rubric of “unfair acts and practices,” substitutes its own judgment for a private firm’s decisions as to how to design its product to satisfy as many users as possible, and requires a company to revamp an otherwise indisputably legitimate business practice. Given the apparent benefits to some consumers and to competition from Apple’s allegedly unfair practices, I believe the Commission should have conducted a much more robust analysis to determine whether the injury to this small group of consumers justifies the finding of unfairness and the imposition of a remedy.

…. although Apple’s allegedly unfair act or practice has harmed some consumers, I do not believe the Commission has demonstrated the injury is substantial. More importantly, any injury to consumers flowing from Apple’s choice of disclosure and billing practices is outweighed considerably by the benefits to competition and to consumers that flow from the same practice.

The majority insisted that the burden on consumers or Apple from its remedy “is de minimis,” and therefore “it was unnecessary for the Commission to undertake a study of how consumers react to different disclosures before issuing its complaint against Apple, as Commissioner Wright suggests.”

Wright responded: “Apple has apparently determined that most consumers do not want to experience excessive disclosures or to be inconvenienced by having to enter their passwords every time they make a purchase.” In essence, he argued, the FTC should not presume to know better than Apple how to manage the subtle trade-offs between convenience and usability.

Wright was channeling Hayek’s famous quip: “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” The last thing the FTC should be doing is designing digital products — even by hovering over Apple’s shoulder.

The Data Broker Report

Wright next took the Commission to task for the lack of economic analysis in its May 2014 report, “Data Brokers: A Call for Transparency and Accountability.” In just four footnotes, Wright extended his analysis of Apple. For example:

Footnote 85: Commissioner Wright agrees that Congress should consider legislation that would provide for consumer access to the information collected by data brokers. However, he does not believe that at this time there is enough evidence that the benefits to consumers of requiring data brokers to provide them with the ability to opt out of the sharing of all consumer information for marketing purposes outweighs the costs of imposing such a restriction. Finally… he believes that the Commission should engage in a rigorous study of consumer preferences sufficient to establish that consumers would likely benefit from such a portal prior to making such a recommendation.

Footnote 88: Commissioner Wright believes that in enacting statutes such as the Fair Credit Reporting Act, Congress undertook efforts to balance [costs and benefits]. In the instant case, Commissioner Wright is wary of extending FCRA-like coverage to other uses and categories of information without first performing a more robust balancing of the benefits and costs associated with imposing these requirements

The Internet of Things Report

This January, in a 4-page dissent from the FTC’s staff report on “The Internet of Things: Privacy and Security in a Connected World,” Wright lamented that the report neither represented serious economic analysis of the issues discussed nor synthesized the FTC’s workshop on the topic:

A record that consists of a one-day workshop, its accompanying public comments, and the staff’s impressions of those proceedings, however well-intended, is neither likely to result in a representative sample of viewpoints nor to generate information sufficient to support legislative or policy recommendations.

His attack on the report’s methodology was blistering:

The Workshop Report does not perform any actual analysis whatsoever to ensure that, or even to give a rough sense of the likelihood that the benefits of the staff’s various proposals exceed their attendant costs. Instead, the Workshop Report merely relies upon its own assertions and various surveys that are not necessarily representative and, in any event, do not shed much light on actual consumer preferences as revealed by conduct in the marketplace…. I support the well-established Commission view that companies must maintain reasonable and appropriate security measures; that inquiry necessitates a cost-benefit analysis. The most significant drawback of the concepts of “security by design” and other privacy-related catchphrases is that they do not appear to contain any meaningful analytical content.

Ouch.

Nomi: Deception & Materiality Analysis

In April, Wright turned his analytical artillery from unfairness to deception, long the less controversial half of UDAP. In a five-page dissent, Wright accused the Commission of essentially dispensing with the core limiting principle of the 1983 Deception Policy Statement: materiality. As Wright explained:

The materiality inquiry is critical because the Commission’s construct of “deception” uses materiality as an evidentiary proxy for consumer injury…. Deception causes consumer harm because it influences consumer behavior — that is, the deceptive statement is one that is not merely misleading in the abstract but one that causes consumers to make choices to their detriment that they would not have otherwise made. This essential link between materiality and consumer injury ensures the Commission’s deception authority is employed to deter only conduct that is likely to harm consumers and does not chill business conduct that makes consumers better off.

As in Apple, Wright did not argue that there might not be a role for the FTC; merely that the FTC had failed to justify bringing, let alone settling, an enforcement action without establishing that the key promise at issue — to provide in-store opt-out — was material.

The Chamber Speech: A Call for Economic Analysis

In May, Wright gave a speech to the Chamber of Commerce on “How to Regulate the Internet of Things Without Harming its Future: Some Do’s and Don’ts”:

Perhaps it is because I am an economist who likes to deal with hard data, but when it comes to data and privacy regulation, the tendency to rely upon anecdote to motivate policy is a serious problem. Instead of developing a proper factual record that documents cognizable and actual harms, regulators can sometimes be tempted merely to explore anecdotal and other hypothetical examples and end up just offering speculations about the possibility of harm.

And on privacy in particular:

What I have seen instead is what appears to be a generalized apprehension about the collection and use of data — whether or not the data is actually personally identifiable or sensitive — along with a corresponding, and arguably crippling, fear about the possible misuse of such data.  …. Any sensible approach to regulating the collection and use of data will take into account the risk of abuses that will harm consumers. But those risks must be weighed with as much precision as possible, as is the case with potential consumer benefits, in order to guide sensible policy for data collection and use. The appropriate calibration, of course, turns on our best estimates of how policy changes will actually impact consumers on the margin….

Wright concedes that the “vast majority of work that the Consumer Protection Bureau performs simply does not require significant economic analysis because they involve business practices that create substantial risk of consumer harm but little or nothing in the way of consumer benefits.” Yet he notes that the Internet has made the need for cost-benefit analysis far more acute, at least where conduct is ambiguous as to its effects on consumers, as in Apple, to avoid “squelching innovation and depriving consumers of these benefits.”

The Wrightian Reform Agenda for UDAP Enforcement

Wright left all the building blocks his successor will need to bring “Wrightian” reform to how the Bureau of Consumer Protection works:

  1. Wright’s successor should work to require economic analysis for consent decrees, as Wright proposed in his last major address as a Commissioner. The Bureau of Economics (BE) might not need to issue a statement at all in run-of-the-mill deception cases, but it should certainly have to say something about unfairness cases.
  2. The FTC needs to systematically assess its enforcement process to understand the incentives causing companies to settle UDAP cases nearly every time — resulting in what Chairman Ramirez and Commissioner Brill frequently call the FTC’s “common law of consent decrees.”
  3. As Wright says in his Nomi dissent, “While the Act does not set forth a separate standard for accepting a consent decree, I believe that threshold should be at least as high as for bringing the initial complaint.” This point should be uncontroversial, yet the Commission has never addressed it. Wright’s successor (and the FTC) should, at a minimum, propose a standard for settling cases.
  4. Just as Josh succeeded in getting the FTC to issue a UMC policy statement, his successor should re-assess the FTC’s two UDAP policy statements. Wright’s successor needs to make the case for finally codifying the DPS — and ensuring that the FTC stops bypassing materiality, as in Nomi.
  5. The Commission should develop a rigorous methodology for each of the required elements of unfairness and deception to justify bringing cases (or making report recommendations). This will be a great deal harder than merely attacking the lack of such methodology in dissents.
  6. The FTC has, in recent years, increasingly used reports to make de facto policy — by inventing what Wright calls, in his Chamber speech, “slogans and catchphrases” like “privacy by design,” and then using them as boilerplate requirements for consent decrees; by pressuring companies into adopting the FTC’s best practices; by calling for legislation; and so on. At a minimum, these reports must be grounded in careful economic analysis.
  7. The Commission should apply far greater rigor in setting standards for substantiating claims about health benefits. In two dissents, Genelink et al and HCG Platinum, Wright demolished arguments for a clear, bright line requiring two randomized clinical trials, and made the case for “a more flexible substantiation requirement” instead.

Conclusion: Big Shoes to Fill

It’s a testament to Wright’s analytical clarity that he managed to say so much about consumer protection in so few words. That his UDAP work has received so little attention, relative to his competition work, says just as much about the far greater need for someone to do for consumer protection what Wright did for competition enforcement and policy at the FTC.

Wright’s successor, if she’s going to finish what Wright started, will need something approaching Wright’s sheer intellect, his deep internalization of the error-costs approach, and his knack for brokering bipartisan compromise around major issues — plus the kind of passion for UDAP matters Wright had for competition matters. And, of course, that person needs to be able to continue his legacy on competition matters…

Compared to the difficulty of finding that person, actually implementing these reforms may be the easy part.

Nearly all economists from across the political spectrum agree: free trade is good. Yet free trade agreements are not always the same thing as free trade. Whether we’re talking about the Trans-Pacific Partnership or the European Union’s Digital Single Market (DSM) initiative, the question is always whether the agreement in question is reducing barriers to trade, or actually enacting barriers to trade into law.

It’s becoming more and more clear that there should be real concerns about the direction the EU is heading with its DSM. As the EU moves forward with the 16 different action proposals that make up this ambitious strategy, we should all pay special attention to the actual rules that come out of it, such as the recent Data Protection Regulation. Are EU regulators simply trying to hogtie innovators in the wild, wild West, as some have suggested? Let’s break it down. Here are the Good, the Bad, and the Ugly.

The Good

The Data Protection Regulation, as proposed by the Ministers of Justice Council and to be taken up in trilogue negotiations with the Parliament and Council this month, will set up a single set of rules for companies to follow throughout the EU. Rather than having to deal with the disparate rules of 28 different countries, companies will have to follow only the EU-wide Data Protection Regulation. It’s hard to determine whether the EU is right about its lofty estimate of this benefit (€2.3 billion a year), but no doubt it’s positive. This is what free trade is about: making commerce “regular” by reducing barriers to trade between states and nations.

Additionally, the Data Protection Regulation would create a “one-stop shop” for consumers and businesses alike. Regardless of where companies are located or process personal information, consumers would be able to go to their own national authority, in their own language, to help them. Similarly, companies would need to deal with only one supervisory authority.

Further, there will be benefits to smaller businesses. For instance, the Data Protection Regulation will exempt businesses smaller than a certain threshold from the obligation to appoint a data protection officer if data processing is not a part of their core business activity. On top of that, businesses will not have to notify every supervisory authority about each instance of collection and processing, and will have the ability to charge consumers fees for certain requests to access data. These changes will allow businesses, especially smaller ones, to save considerable money and human capital. Finally, smaller entities won’t have to carry out an impact assessment before engaging in processing unless there is a specific risk. These rules are designed to increase flexibility on the margin.

If this were all the rules were about, then they would be a boon to the major American tech companies that have expressed concern about the DSM. These companies would be able to deal with EU citizens under one set of rules and consumers would be able to take advantage of the many benefits of free flowing information in the digital economy.

The Bad

Unfortunately, the substance of the Data Protection Regulation isn’t limited simply to preempting 28 bad privacy rules with an economically sensible standard for Internet companies that rely on data collection and targeted advertising for their business model. Instead, the Data Protection Regulation would set up new rules that will impose significant costs on the Internet ecosphere.

For instance, giving citizens a “right to be forgotten” sounds good, but it will considerably impact companies built on providing information to the world. There are real costs to administering such a rule, and these costs will not ultimately be borne by search engines, social networks, and advertisers, but by consumers who ultimately will have to find either a different way to pay for the popular online services they want or go without them. For instance, Google has had to hire a large “team of lawyers, engineers and paralegals who have so far evaluated over half a million URLs that were requested to be delisted from search results by European citizens.”

Privacy rights need to be balanced with not only economic efficiency, but also with the right to free expression that most European countries hold (though not necessarily with a robust First Amendment like that in the United States). Stories about the right to be forgotten conflicting with the ability of journalists to report on issues of public concern make clear that there is a potential problem there. The Data Protection Regulation does attempt to balance the right to be forgotten with the right to report, but it’s not likely that a similar rule would survive First Amendment scrutiny in the United States. American companies accustomed to such protections will need to be wary operating under the EU’s standard.

Similarly, mandating rules on data minimization and data portability may sound like good design ideas in light of data security and privacy concerns, but there are real costs to consumers and innovation in forcing companies to adopt particular business models.

Mandated data minimization limits the ability of companies to innovate and lessens the opportunity for consumers to benefit from unexpected uses of information. Overly strict requirements on data minimization could slow down the incredible growth of the economy from the Big Data revolution, which has provided a plethora of benefits to consumers from new uses of information, often in ways unfathomable even a short time ago. As an article in Harvard Magazine recently noted,

The story [of data analytics] follows a similar pattern in every field… The leaders are qualitative experts in their field. Then a statistical researcher who doesn’t know the details of the field comes in and, using modern data analysis, adds tremendous insight and value.

And mandated data portability is an overbroad per se remedy for possible exclusionary conduct that could also benefit consumers greatly. The rule will apply to businesses regardless of market power, meaning that it will also impair small companies with no ability to actually hurt consumers by restricting their ability to take data elsewhere. Aside from this, multi-homing is ubiquitous in the Internet economy, anyway. This appears to be another remedy in search of a problem.

The bad news is that these rules will likely deter innovation and reduce consumer welfare for EU citizens.

The Ugly

Finally, the Data Protection Regulation suffers from an ugly defect: it may actually be ratifying a form of protectionism into the rules. Both the intent and the likely effect of the rules appear to be to “level the playing field” by knocking down American Internet companies.

For instance, the EU has long allowed flexibility for US companies operating in Europe under the US-EU Safe Harbor. But EU officials are aiming at reducing this flexibility. As the Wall Street Journal has reported:

For months, European government officials and regulators have clashed with the likes of Google, Amazon.com and Facebook over everything from taxes to privacy…. “American companies come from outside and act as if it was a lawless environment to which they are coming,” [Commissioner Reding] told the Journal. “There are conflicts not only about competition rules but also simply about obeying the rules.” In many past tussles with European officialdom, American executives have countered that they bring innovation, and follow all local laws and regulations… A recent EU report found that European citizens’ personal data, sent to the U.S. under Safe Harbor, may be processed by U.S. authorities in a way incompatible with the grounds on which they were originally collected in the EU. Europeans allege this harms European tech companies, which must play by stricter rules about what they can do with citizens’ data for advertising, targeting products and searches. Ms. Reding said Safe Harbor offered a “unilateral advantage” to American companies.

Thus, while “when in Rome…” is generally good advice, the Data Protection Regulation appears to be aimed primarily at removing the “advantages” of American Internet companies—at which rent-seekers and regulators throughout the continent have taken aim. As mentioned above, supporters often name American companies outright among the reasons why the DSM’s Data Protection Regulation is needed. But opponents have noted that new regulation aimed at American companies is not needed in order to police abuses:

Speaking at an event in London, [EU Antitrust Chief] Ms. Vestager said it would be “tricky” to design EU regulation targeting the various large Internet firms like Facebook, Amazon.com Inc. and eBay Inc. because it was hard to establish what they had in common besides “facilitating something”… New EU regulation aimed at reining in large Internet companies would take years to create and would then address historic rather than future problems, Ms. Vestager said. “We need to think about what it is we want to achieve that can’t be achieved by enforcing competition law,” Ms. Vestager said.

Moreover, of the 15 largest Internet companies, 11 are American and 4 are Chinese. None is European. So any rules applying to the Internet ecosphere are inevitably going to fall disproportionately on these important US companies. But if Europe wants to compete more effectively, it should foster a regulatory regime friendly to Internet business, rather than extend inefficient privacy rules to American companies under the guise of free trade.

Conclusion

Near the end of The Good, the Bad, and the Ugly, Blondie and Tuco have an exchange that seems apropos to the situation we’re in:

Blondie: [watching the soldiers fighting on the bridge] I have a feeling it’s really gonna be a good, long battle.
Tuco: Blondie, the money’s on the other side of the river.
Blondie: Oh? Where?
Tuco: Amigo, I said on the other side, and that’s enough. But while the Confederates are there we can’t get across.
Blondie: What would happen if somebody were to blow up that bridge?

The EU’s DSM proposals are going to be a good, long battle. But key players in the EU recognize that the tech money — along with the services and ongoing innovation that benefit EU citizens — is really on the other side of the river. If they blow up the bridge of trade between the EU and the US, though, we will all be worse off — but Europeans most of all.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in that regulation are only likely to increase in importance as the Internet marketplace continues to grow and evolve. Scholars and advocates have called on the European Commission and the FTC to give greater consideration to privacy concerns during merger review, and have even encouraged them to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on such metrics would want to serve only those who can pay more, charging them a lower price while charging those who cannot afford it a higher one. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.
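
To make the magnitudes point concrete, here is a purely hypothetical, stylized illustration (the numbers are illustrative assumptions, not figures from the paper or from any study): suppose 100 consumers (group A) value a service at $15 each, while another 100 consumers (group B) value it at $8 each.

\[
\text{Uniform price } p = 10:\quad CS = 100(15-10) = 500, \qquad \text{output} = 100
\]
\[
\text{Personalized prices } p_A = 12,\ p_B = 6:\quad CS = 100(15-12) + 100(8-6) = 500, \qquad \text{output} = 200
\]

In this stylized case, consumer surplus is unchanged while output doubles; whether surplus rises or falls in any real market turns entirely on the relative sizes of the two groups and the size of the price changes, which is precisely the question of magnitudes described above.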

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.

On Wednesday, March 18, our fellow law-and-economics-focused brethren at George Mason’s Law and Economics Center will host a very interesting morning briefing on the intersection of privacy, big data, consumer protection, and antitrust. FTC Commissioner Maureen Ohlhausen will keynote, and she will be followed by what looks like it will be a lively panel discussion. If you are in DC you can join in person, but you can also watch online. More details below.

Please join the LEC in person or online for a morning of lively discussion on this topic. FTC Commissioner Maureen K. Ohlhausen will set the stage by discussing her Antitrust Law Journal article, “Competition, Consumer Protection and The Right [Approach] To Privacy.” A panel discussion on big data and antitrust, which includes some of the leading thinkers on the subject, will follow.

Other featured speakers include:

Allen P. Grunes
Founder, The Konkurrenz Group and Data Competition Institute

Andres Lerner
Executive Vice President, Compass Lexecon

Darren S. Tucker
Partner, Morgan Lewis

Nathan Newman
Director, Economic and Technology Strategies LLC

Moderator: James C. Cooper
Director, Research and Policy, Law & Economics Center

A full agenda is available here.

UPDATE: I’ve been reliably informed that Vint Cerf coined the term “permissionless innovation,” and, thus, that he did so with the sorts of private impediments discussed below in mind rather than government regulation. So consider the title of this post changed to “Permissionless innovation SHOULD not mean ‘no contracts required,'” and I’ll happily accept that my version is the “bastardized” version of the term. Which just means that the original conception was wrong and thank god for disruptive innovation in policy memes!

Can we dispense with the bastardization of the “permissionless innovation” concept (best developed by Adam Thierer) to mean “no contracts required”? I’ve been seeing this more and more, but it’s been around for a while. Some examples from among the innumerable ones out there:

Vint Cerf on net neutrality in 2009:

We believe that the vast numbers of innovative Internet applications over the last decade are a direct consequence of an open and freely accessible Internet. Many now-successful companies have deployed their services on the Internet without the need to negotiate special arrangements with Internet Service Providers, and it’s crucial that future innovators have the same opportunity. We are advocates for “permissionless innovation” that does not impede entrepreneurial enterprise.

Net neutrality is replete with this sort of idea — that any impediment to edge providers (not networks, of course) doing whatever they want to do at a zero price is a threat to innovation.

Chet Kanojia (Aereo CEO) following the Aereo decision:

It is troubling that the Court states in its decision that, ‘to the extent commercial actors or other interested entities may be concerned with the relationship between the development and use of such technologies and the Copyright Act, they are of course free to seek action from Congress.’ (Majority, page 17) That begs the question: Are we moving towards a permission-based system for technology innovation?

At least he puts it in the context of the Court’s suggestion that Congress pass a law, but what he really wants is to not have to ask “permission” of content providers to use their content.

Mike Masnick on copyright in 2010:

But, of course, the problem with all of this is that it goes back to creating permission culture, rather than a culture where people freely create. You won’t be able to use these popular or useful tools to build on the works of others — which, contrary to the claims of today’s copyright defenders, is a key component in almost all creativity you see out there — without first getting permission.

Fair use is, by definition, supposed to be “permissionless.” But the concept is hardly limited to fair use, is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright (see, e.g., Mike Masnick again), which otherwise requires those pernicious licenses (i.e., permission) from others.

The point is, when we talk about permissionless innovation for Tesla, Uber, Airbnb, commercial drones, online data and the like, we’re talking (or should be) about ex ante government restrictions on these things — the “permission” at issue is permission from the government; it’s the “permission” required to get around regulatory roadblocks imposed via rent-seeking and baseless paternalism. As Gordon Crovitz writes, quoting Thierer:

“The central fault line in technology policy debates today can be thought of as ‘the permission question,'” Mr. Thierer writes. “Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations?”

But it isn’t (or shouldn’t be) about private contracts.

Just about all human (commercial) activity requires interaction with others, and that means contracts and licenses. You don’t see anyone complaining about the “permission” required to rent space from a landlord. But that some form of “permission” may be required to use someone else’s creative works or other property (including broadband networks) is no different. And, in fact, it is these sorts of contracts (and, yes, the revenue that may come with them) that facilitates people engaging with other commercial actors to produce things of value in the first place. The same can’t be said of government permission.

Don’t get me wrong – there may be some net welfare-enhancing regulatory limits that might require forms of government permission. But the real concern is the pervasive abuse of these limits, imposed without anything approaching a rigorous welfare determination. There might even be instances where private permission, imposed, say, by a true monopolist, might be problematic.

But this idea that any contractual obligation amounts to a problematic impediment to innovation is absurd, and, in fact, precisely backward. Which is why net neutrality is so misguided. Instead of identifying actual, problematic impediments to innovation, it simply assumes that networks threaten edge innovation, without any corresponding benefit and with such certainty (although no actual evidence) that ex ante common carrier regulations are required.

“Permissionless innovation” is a great phrase and, well developed (as Adam Thierer has done), a useful concept. But its bastardization to justify interference with private contracts is unsupported and pernicious.

The Children’s Online Privacy Protection Act (COPPA) continues to be a hot-button issue for many online businesses and privacy advocates. On November 14, Senator Markey, along with Senator Kirk and Representatives Barton and Rush, introduced the Do Not Track Kids Act of 2013 to amend the statute to cover children aged 13 to 15 and to add new requirements, like an eraser button. The current COPPA Rule, since the FTC’s recent update went into effect this past summer, requires parental consent before businesses can collect information about children online, including relatively de-identified information like IP addresses and device numbers that allow for targeted advertising.

Often, the debate about COPPA is framed in a way that makes it very difficult to discuss as a policy matter. With the stated purpose of “enhanc[ing] parental involvement in children’s online activities in order to protect children’s privacy,” who can really object? While there is recognition that there are substantial costs to COPPA compliance (including foregone innovation and investment in children’s media), it’s generally taken for granted by all that the Rule is necessary to protect children online. But it has never been clear what COPPA is supposed to help us protect our children from.

Then-Representative Markey’s original speech suggested one possible answer in “protect[ing] children’s safety when they visit and post information on public chat rooms and message boards.” If COPPA is to be understood in this light, the newest COPPA revision from the FTC and the proposed Do Not Track Kids Act of 2013 largely miss the mark. It seems unlikely that proponents worry about children or teens posting their IP address or device numbers online, allowing online predators to look at this information and track them down. Rather, the clear goal animating the updates to COPPA is to “protect” children from online behavioral advertising. Here’s now-Senator Markey’s press statement:

“The speed with which Facebook is pushing teens to share their sensitive, personal information widely and publicly online must spur Congress to act commensurately to put strong privacy protections on the books for teens and parents,” said Senator Markey. “Now is the time to pass the bipartisan Do Not Track Kids Act so that children and teens don’t have their information collected and sold to the highest bidder. Corporations like Facebook should not be profiting from the personal and sensitive information of children and teens, and parents and teens should have the right to control their personal information online.”

The concern about online behavioral advertising could probably be understood in at least three ways, but each of them is flawed.

  1. Creepiness. Some people believe there is something just “creepy” about companies collecting data on consumers, especially when it comes to children and teens. While nearly everyone would agree that surreptitiously collecting data like email addresses or physical addresses without consent is wrong, many would probably prefer to trade data like IP addresses and device numbers for free content (as nearly everyone does every day on the Internet). It is also unclear that COPPA is the answer to this type of problem, even if “creepiness” could be clearly defined. As Adam Thierer has pointed out, parents are in a much better position than government regulators or even companies to protect their children from privacy practices they don’t like.
  2. Exploitation. Another way to understand the concern is that companies are exploiting consumers by making money off their data without consumers getting any value in return. But this fundamentally ignores the multi-sided market at play here. Users trade information for a free service, whether it be Facebook, Google, or Twitter. These services then monetize that information by creating profiles and selling them to advertisers. Advertisers then place ads based on that information in the hopes of increasing sales. In the end, though, companies make money only when consumers buy their products. Free content funded by such advertising is likely a win-win-win for everyone involved.
  3. False Consciousness. A third way to understand the concern over behavioral advertising is that corporations can convince consumers to buy things they don’t need or really want through advertising. Much of this is driven by what Jack Calfee called The Fear of Persuasion: many people don’t understand the beneficial effects of advertising in increasing the information available to consumers and, as a result, misdiagnose the role of advertising. Even accepting this false consciousness theory, the difficulty for COPPA is that no one has ever explained why advertising is a harm to children or teens. If anything, online behavioral advertising is less of a harm to teens and children than adults for one simple reason: Children and teens can’t (usually) buy anything! Kids and teens need their parents’ credit cards in order to buy stuff online. This means that parental involvement is already necessary, and has little need of further empowerment by government regulation.

COPPA may have benefits in preserving children’s safety — as Markey once put it — beyond what underlying laws, industry self-regulation and parental involvement can offer. But as we work to update the law, we shouldn’t allow the Rule to be a solution in search of a problem. It is incumbent upon Markey and other supporters of the latest amendment to demonstrate that the amendment will serve to actually protect kids from something they need protecting from. Absent that, the costs very likely outweigh the benefits.

Like most libertarians, I’m concerned about government abuse of power. Certainly the secrecy and seeming reach of the NSA’s information-gathering programs are worrying. But we can’t and shouldn’t pretend that there are no countervailing concerns (as Gordon Crovitz points out). And we certainly shouldn’t allow the fervent ire of the most radical voices — those who view the issue solely from one side — to impel technology companies to take matters into their own hands. At least not yet.

Rather, the issue is inherently political. And while the political process is far from perfect, I’m almost as uncomfortable with the radical voices calling for corporations to “do something,” without evincing any nuanced understanding of the issues involved, as I am with the government programs themselves.

Frankly, I see this as of a piece with much of the privacy debate that points the finger at corporations for collecting data (and ignores the value of their collection of data) while identifying government use of the data they collect as the actual problem. Typically most of my cyber-libertarian friends are with me on this: If the problem is the government’s use of data, then attack that problem; don’t hamstring corporations and the benefits they confer on consumers for the sake of a problem that is not of their making and without regard to the enormous costs such a solution imposes.

Verizon, unlike just about every other technology company, seems to get this. In a recent speech, John Stratton, head of Verizon’s Enterprise Solutions unit, had this to say:

“This is not a question that will be answered by a telecom executive, this is not a question that will be answered by an IT executive. This is a question that must be answered by societies themselves.”

“I believe this is a bigger issue, and press releases and fizzy statements don’t get at the issue; it needs to be solved by society.”

Stratton said that as a company, Verizon follows the law, and those laws are set by governments.

“The laws are not set by Verizon, they are set by the governments in which we operate. I think it’s important for us to recognise that we participate in debate, as citizens, but as a company I have obligations that I am going to follow.”

I completely agree. There may be a problem, but before we deputize corporations in the service of even well-meaning activism, shouldn’t we address this as the political issue it is first?

I’ve been making a version of this point for a long time. As I said back in 2006:

I find it interesting that the “blame” for privacy incursions by the government is being laid at Google’s feet. Google isn’t doing the . . . incursioning, and we wouldn’t have to saddle Google with any costs of protection (perhaps even lessening functionality) if we just nipped the problem in the bud. Importantly, the implication here is that government should not have access to the information in question–a decision that sounds inherently political to me. I’m just a little surprised to hear anyone (other than me) saying that corporations should take it upon themselves to “fix” government policy by, in effect, destroying records.

But at the same time, it makes some sense to look to Google to ameliorate these costs. Google is, after all, responsive to market forces, and (once in a while) I’m sure markets respond to consumer preferences more quickly and effectively than politicians do. And if Google perceives that offering more protection for its customers can be more cheaply done by restraining the government than by curtailing its own practices, then Dan [Solove]’s suggestion that Google take the lead in lobbying for greater legislative protections of personal information may come to pass. Of course we’re still left with the problem of Google and not the politicians bearing the cost of their folly (if it is folly).

As I said then, there may be a role for tech companies to take the lead in lobbying for changes. And perhaps that’s what’s happening. But the impetus behind it — the implicit threats from civil liberties groups, the position that there can be no countervailing benefits from the government’s use of this data, the consistent view that corporations should be forced to deal with these political problems, and the predictable capitulation (and subsequent grandstanding, as Stratton calls it) by these companies — is not the right way to go.

I applaud Verizon’s stance here. Perhaps as a society we should come out against some or all of the NSA’s programs. But ideological moralizing and corporate bludgeoning aren’t the way to get there.

I’ll be headed to New Orleans tomorrow to participate in the Federalist Society Faculty Conference and the AALS Annual Meeting.

For those attending and interested, I’ll be speaking at the Fed Soc on privacy and antitrust, and at AALS on Google and antitrust.  Details below.  I hope to see you there!

Federalist Society:

Seven-Minute Presentations of Works in Progress – Part I
Friday, January 4, 5:00 p.m. – 6:00 p.m.
Location: Bacchus Room, Wyndham Riverfront Hotel

  • Prof. Geoffrey Manne, Lewis & Clark School of Law, “Is There a Place for Privacy in Antitrust?”
  • Prof. Zvi Rosen, New York University School of Law, “Discharging Fiduciary Debts in Bankruptcy”
  • Prof. Erin Sheley, George Washington University School of Law, “The Body, the Self, and the Legal Account of Harm”
  • Prof. Scott Shepard, John Marshall Law School, “A Negative Externality by Any Other Name: Using Emissions Caps as Models for Constraining Dead-Weight Costs of Regulation”
  • Moderator: Prof. David Olson, Boston College Law School

AALS:

Google and Antitrust
Saturday, January 5, 10:30 a.m. – 12:15 p.m.
Location: Newberry, Third Floor, Hilton New Orleans Riverside

  • Moderator: Michael A. Carrier, Rutgers School of Law – Camden
  • Marina L. Lao, Seton Hall University School of Law
  • Geoffrey A. Manne, Lewis & Clark Law School
  • Frank A. Pasquale, Seton Hall University School of Law
  • Mark R. Patterson, Fordham University School of Law
  • Pamela Samuelson, University of California, Berkeley, School of Law