Archives For Ed Markey

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Will Rinehart (Senior Research Fellow, Center for Growth and Opportunity).]

Nellie Bowles, a longtime critic of tech, recently had a change of heart, which she relayed in the New York Times:

Before the coronavirus, there was something I used to worry about. It was called screen time. Perhaps you remember it.

I thought about it. I wrote about it. A lot. I would try different digital detoxes as if they were fad diets, each working for a week or two before I’d be back on that smooth glowing glass.

Now I have thrown off the shackles of screen-time guilt. My television is on. My computer is open. My phone is unlocked, glittering. I want to be covered in screens. If I had a virtual reality headset nearby, I would strap it on.

Bowles isn’t alone. The Washington Post recently documented how social distancing has caused people to “rethink one of the great villains of modern technology: screens.” Matthew Yglesias of Vox has been critical of tech in the past as well, but recently admitted that these tools are “making our lives much better.” Cal Newport might have called for Twitter to be shut down, but now thinks the service can be useful. These anecdotes speak to a larger trend. According to one national poll, some 88 percent of Americans now have a better appreciation for technology since this pandemic has forced them to rely upon it.

Before COVID-19, catchy headlines like “Heavy Social Media Use Linked With Mental Health Issues In Teens” and “Have Smartphones Destroyed a Generation?” were met with nods of approval. These concerns found backing in legislation like Senator Josh Hawley’s “Social Media Addiction Reduction Technology Act” or SMART Act. The opening lines of the SMART Act make it clear the legislation would “prohibit social media companies from using practices that exploit human psychology or brain physiology to substantially impede freedom of choice, [and] to require social media companies to take measures to mitigate the risks of internet addiction and psychological exploitation.”

Most psychologists steer clear of the term addiction because it implies that a person engages in hazardous use, shows tolerance, and neglects social roles. Because social media, gaming, and cell phone use don’t meet this threshold, the profession instead describes those who experience negative impacts as engaging in problematic use, a label that applies only to a small minority. According to one estimate, for example, only half a percent of gamers show patterns of problematic use.

Even though tech use doesn’t meet the criteria for addiction, the term addiction finds purchase in policy discussions and media outlets because it suggests a healthier norm. Computer games have prosocial benefits, yet it is common to hear that the activity is no match for going outside to play. The same kind of argument exists with social media and phone use; face-to-face communication is preferred to tech-enabled communication. 

But the coronavirus has inverted the normal conditions. Social distancing doesn’t allow us to connect in person or play outside with friends. Faced with no alternative, people have embraced technology. Videoconferencing is up, as is social media use. This new norm has brought with it a needed rethink of critiques of tech. Even before this moment, however, the research on tech effects has had its problems.

To begin, even though they have been researched extensively, screen time and social media use have not been clearly shown to cause harm. Earlier this year, psychologists Candice Odgers and Michaeline Jensen conducted a massive literature review and summarized the research as “a mix of often conflicting small positive, negative and null associations.” The researchers also point out that studies finding a negative relationship between well-being and tech use tend to be correlational, not causal, and thus are “unlikely to be of clinical or practical significance” to parents or therapists.

Through no fault of their own, researchers tend to focus on a limited number of relationships when it comes to tech use. But professors Amy Orben and Andrew Przybylski were able to sidestep these problems by getting computers to test every theoretically defensible hypothesis. In a writeup appropriately titled “Beyond Cherry-Picking,” the duo explained why this method is important to policymakers:

Although statistical significance is often used as an indicator that findings are practically significant, the paper moves beyond this surrogate to put its findings in a real-world context.  In one dataset, for example, the negative effect of wearing glasses on adolescent well-being is significantly higher than that of social media use. Yet policymakers are currently not contemplating pumping billions into interventions that aim to decrease the use of glasses.

Their academic paper throws cold water on the screen time and tech use debate. Since social media explains only 0.4% of the variation in well-being, much greater welfare gains can be made by concentrating on other policy issues. For example, regularly eating breakfast, getting enough sleep, and avoiding marijuana use play much larger roles in the well-being of adolescents. Social media accounts for only a tiny portion of what determines adolescent well-being.

Second, most social media research relies on self-reporting methods, which are systematically biased and often unreliable. Communication professor Michael Scharkow, for example, compared self-reports of Internet use with computer log files, which show everything a computer has done and when, and found that “survey data are only moderately correlated with log file data.” A quartet of psychology professors in the UK discovered that self-reported smartphone use and social media addiction scales face a similar problem: they don’t accurately capture actual behavior. Patrick Markey, Professor and Director of the IR Laboratory at Villanova University, summarized the work: “the fear of smartphones and social media was built on a castle made of sand.”

Expert bodies have been changing their tune as well. The American Academy of Pediatrics took a hard-line stance for years, preaching digital abstinence. But the organization has since backpedaled and now says that screens are fine in moderation and that parents and children should work together to set boundaries.

Once this pandemic is behind us, policymakers and experts should reconsider the screen time debate. We need to move away from loaded terms like addiction and embrace a more realistic model of the world. The truth is that everyone’s relationship with technology is complicated. Instead of paternalistic legislation, leaders should place the onus on parents and individuals to figure out what is right for them.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Christine S. Wilson (Commissioner of the U.S. Federal Trade Commission).[1] The views expressed here are the author’s and do not necessarily reflect those of the Federal Trade Commission or any other Commissioner.]  

I type these words while subject to a stay-at-home order issued by West Virginia Governor James C. Justice II. “To preserve public health and safety, and to ensure the healthcare system in West Virginia is capable of serving all citizens in need,” I am permitted to leave my home only for a limited and precisely enumerated set of reasons. Billions of citizens around the globe are now operating under similar shelter-in-place directives as governments grapple with how to stem the tide of infection, illness and death inflicted by the global Covid-19 pandemic. Indeed, the first response of many governments has been to impose severe limitations on physical movement to contain the spread of the novel coronavirus. The second response contemplated by many, and the one on which this blog post focuses, involves the extensive collection and analysis of data in connection with people’s movements and health. Some governments are using that data to conduct sophisticated contact tracing, while others are using the power of the state to enforce orders for quarantines and against gatherings.

The desire to use modern technology on a broad scale for the sake of public safety is not unique to this moment. Technology is intended to improve the quality of our lives, in part by enabling us to help ourselves and one another. For example, cell towers broadcast wireless emergency alerts to all mobile devices in the area to warn us of extreme weather and other threats to safety in our vicinity. One well-known type of broadcast is the Amber Alert, which enables community members to assist in recovering an abducted child by providing descriptions of the abductor, the abductee and the abductor’s vehicle. Citizens who spot individuals and vehicles that meet these descriptions can then provide leads to law enforcement authorities. A private nonprofit organization, the National Center for Missing and Exploited Children, coordinates with state and local public safety officials to send out Amber Alerts through privately owned wireless carriers.

The robust civil society and free market in the U.S. make partnerships between the private sector and government agencies commonplace. But some of these arrangements involve a much more extensive sharing of Americans’ personal information with law enforcement than the emergency alert system does.

For example, Amazon’s home security product Ring advertises itself not only as a way to see when a package has been left at your door, but also as a way to make communities safer by turning over video footage to local police departments. In 2018, the company’s pilot program in Newark, New Jersey, donated more than 500 devices to homeowners to install at their homes in two neighborhoods, with a big caveat. Ring recipients were encouraged to share video with police. According to Ring, home burglaries in those neighborhoods fell by more than 50% from April through July 2018 relative to the same time period a year earlier.

Yet members of Congress and privacy experts have raised concerns about these partnerships, which now number in the hundreds. After receiving Amazon’s response to his inquiry, Senator Edward Markey highlighted Ring’s failure to prevent police from sharing video footage with third parties and from keeping the video permanently, and Ring’s lack of precautions to ensure that users collect footage only of adults and of users’ own property. The House of Representatives Subcommittee on Economic and Consumer Policy continues to investigate Ring’s police partnerships and data policies. The Electronic Frontier Foundation has called Ring “a perfect storm of privacy threats,” while the UK surveillance camera commissioner has warned against “a very real power to understand, to surveil you in a way you’ve never been surveilled before.”

Ring demonstrates clearly that it is not new for potential breaches of privacy to be encouraged in the name of public safety; police departments urge citizens to use Ring and share the videos with police to fight crime. But emerging developments indicate that, in the fight against Covid-19, we can expect to see more and more private companies placed in the difficult position of becoming complicit in government overreach.

At least mobile phone users can opt out of receiving Amber Alerts, and residents can refuse to put Ring surveillance systems on their property. The Covid-19 pandemic has made some other technological intrusions effectively impossible to refuse. For example, students once could choose whether to accept online proctors who monitor them over webcams during at-home exams, or instead to take an exam where and when they could be proctored face to face. With public schools and universities across the U.S. closed for the rest of the semester, students who refuse to give private online proctors access to their webcams – and, consequently, the ability to view their surroundings – cannot take exams at all.

Existing technology and data practices already have made the Federal Trade Commission sensitive to potential consumer privacy and data security abuses. For decades, this independent, bipartisan agency has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. It brought its first privacy and data security cases nearly 20 years ago, while I was Chief of Staff to then-Chairman Timothy J. Muris. The FTC took on Eli Lilly for disclosing the e-mail addresses of 669 subscribers to its Prozac reminder service – many of whom were government officials, and at a time of greater stigma for mental health issues – and Microsoft for (among other things) falsely claiming that its Passport website sign-in service did not collect any personally identifiable information other than that described in its privacy policy.

The privacy and data security practices of healthcare and software companies are likely to impact billions of people during the current coronavirus pandemic. The U.S. already has many laws on the books that are relevant to practices in these areas. One notable example is the Health Insurance Portability and Accountability Act, which set national standards for the protection of individually identifiable health information by health plans, health care clearinghouses and health care providers who accept non-cash payments. While the FTC does not enforce HIPAA, it does enforce the Health Breach Notification Rule, as well as the provisions in the FTC Act used to challenge the privacy missteps of Eli Lilly and many other companies.

But technological developments have created gaps in HIPAA enforcement. For example, HIPAA applies to doctors’ offices, hospitals and insurance companies, but it may not apply to wearables, smartphone apps or websites. Yet sensitive medical information is now commonly stored in places other than health care practitioners’ offices.  Your phone and watch now collect information about your blood sugar, exercise habits, fertility and heart health. 

Observers have pointed to these emerging gaps in coverage as evidence of the growing need for federal privacy legislation. I, too, have called on the U.S. Congress to enact comprehensive federal privacy legislation – not only to address these emerging gaps, but for two other reasons.  First, consumers need clarity regarding the types of data collected from them, and how those data are used and shared. I believe consumers can make informed decisions about which goods and services to patronize when they have the information they need to evaluate the costs and benefits of using those goods. Second, businesses need predictability and certainty regarding the rules of the road, given the emerging patchwork of regimes both at home and abroad.

Rules of the road regarding privacy practices will prove particularly instructive during this global pandemic, as governments lean on the private sector for data on the grounds that the collection and analysis of data can help avert (or at least diminish to some extent) a public health catastrophe. With legal lines in place, companies would be better equipped to determine when they are being asked to cross the line for the public good, and whether they should require a subpoena or inform customers before turning over data. It is regrettable that Congress has been unable to enact federal privacy legislation to guide this discussion.

Understandably, Congress does not have privacy at the top of its agenda at the moment, as the U.S. faces a public health crisis. As I write, more than 579,000 Americans have been diagnosed with Covid-19, and more than 22,000 have perished. Sadly, those numbers will only increase. And the U.S. is not alone in confronting this crisis: governments globally have confronted more than 1.77 million cases and more than 111,000 deaths. For a short time, health and safety issues may take precedence over privacy protections. But some of the initiatives to combat the coronavirus pandemic are worrisome. We are learning more every day about how governments are responding in a rapidly developing situation; what I describe in the next section constitutes merely the tip of the iceberg. These initiatives are worth highlighting here, as are potential safeguards for privacy and civil liberties that societies around the world would be wise to embrace.

Some observers view public/private partnerships based on an extensive use of technology and data as key to fighting the spread of Covid-19. For example, Professor Jane Bambauer calls for contact tracing and alerts “to be done in an automated way with the help of mobile service providers’ geolocation data.” She argues that privacy is merely “an instrumental right” that “is meant to achieve certain social goals in fairness, safety and autonomy. It is not an end in itself.” Given the “more vital” interests in health and the liberty to leave one’s house, Bambauer sees “a moral imperative” for the private sector “to ignore even express lack of consent” by an individual to the sharing of information about him.

This proposition troubles me because the extensive data sharing that has been proposed in some countries, and that is already occurring in many others, is not mundane. In the name of advertising and product improvements, private companies have been hoovering up personal data for years. What this pandemic lays bare, though, is that while this trove of information was collected under the guise of cataloguing your coffee preferences and transportation habits, it can be reprocessed in an instant to restrict your movements, impinge on your freedom of association, and silence your freedom of speech. Bambauer is calling for detailed information about an individual’s every movement to be shared with the government when, in the United States under normal circumstances, a warrant would be required to access this information.

Indeed, with our mobile devices acting as the “invisible policeman” described by Justice William O. Douglas in Berger v. New York, we may face “a bald invasion of privacy, far worse than the general warrants prohibited by the Fourth Amendment.” Backward-looking searches and data hoards pose new questions of what constitutes a “reasonable” search. The stakes are high – both here and abroad, citizens are being asked to allow warrantless searches by the government on an astronomical scale, all in the name of public health.  

Abroad

The first country to confront the coronavirus was China. The World Health Organization has touted the measures taken by China as “the only measures that are currently proven to interrupt or minimize transmission chains in humans.” Among these measures are the “rigorous tracking and quarantine of close contacts,” as well as “the use of big data and artificial intelligence (AI) to strengthen contact tracing and the management of priority populations.” An ambassador for China has said his government “optimized the protocol of case discovery and management in multiple ways like backtracking the cell phone positioning.” Much as the Communist Party’s control over China enabled it to suppress early reports of a novel coronavirus, this regime vigorously ensured its people’s compliance with the “stark” containment measures described by the World Health Organization.

Before the Covid-19 pandemic, Hong Kong already had been testing the use of “smart wristbands” to track the movements of prisoners. The Special Administrative Region now monitors people quarantined inside their homes by requiring them to wear wristbands that send information to the quarantined individuals’ smartphones and alert the Department of Health and Police if people leave their homes, break their wristbands or disconnect them from their smartphones. When first announced in early February, the wristbands were required only for people who had been to Wuhan in the past 14 days, but the program rapidly expanded to encompass every person entering Hong Kong. The government denied any privacy concerns about the electronic wristbands, saying the Privacy Commissioner for Personal Data had been consulted about the technology and agreed it could be used to ensure that quarantined individuals remain at home.

Elsewhere in Asia, Taiwan’s Chunghwa Telecom has developed a system that the local CDC calls an “electronic fence.” Specifically, the government obtains the SIM card identifiers for the mobile devices of quarantined individuals and passes those identifiers to mobile network operators, which use phone signals to their cell towers to alert public health and law enforcement agencies when the phone of a quarantined individual leaves a certain geographic range. In response to privacy concerns, the National Communications Commission said the system was authorized by special laws to prevent the coronavirus, and that it “does not violate personal data or privacy protection.” In Singapore, travelers and others issued Stay-Home Notices to remain at their residences 24 hours a day for 14 days must respond within an hour if contacted by government agencies by phone, text message or WhatsApp. And to assist with contact tracing, the government has encouraged everyone in the country to download TraceTogether, an app that uses Bluetooth to identify other nearby phones with the app and tracks when phones are in close proximity.
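To make the mechanics of an “electronic fence” more concrete, here is a minimal, purely illustrative sketch in Python. The coordinates, field shapes, and alert radius are my own assumptions for exposition; they are not drawn from Chunghwa Telecom’s actual system, which works from tower signal data rather than precise location fixes.

```python
import math

# Hypothetical sketch only: coordinates, data shapes, and the alert radius are
# assumptions for illustration, not details of the actual Taiwanese system.

ALERT_RADIUS_METERS = 500  # assumed tolerance for a coarse, tower-derived position fix


def haversine_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    earth_radius = 6_371_000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_phi = math.radians(lat2 - lat1)
    d_lambda = math.radians(lon2 - lon1)
    a = math.sin(d_phi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(d_lambda / 2) ** 2
    return 2 * earth_radius * math.asin(math.sqrt(a))


def breaches_fence(quarantine_location, observed_location):
    """Return True if the observed position falls outside the allowed radius."""
    distance = haversine_meters(*quarantine_location, *observed_location)
    return distance > ALERT_RADIUS_METERS


# Usage: a coarse fix well away from the registered quarantine address triggers an alert.
home = (25.0330, 121.5654)        # assumed quarantine address
latest_fix = (25.0478, 121.5319)  # assumed position estimated from cell towers
if breaches_fence(home, latest_fix):
    print("alert: quarantined device appears to have left its allowed range")
```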

Israel’s Ministry of Health has launched an app for mobile devices called HaMagen (the shield) to prevent the spread of coronavirus by identifying contacts between diagnosed patients and people who came into contact with them in the 14 days prior to diagnosis. In March, the prime minister’s cabinet initially bypassed the legislative body to approve emergency regulations for obtaining without a warrant the cellphone location data and additional personal information of those diagnosed with or suspected of coronavirus infection. The government will send text messages to people who came into contact with potentially infected individuals, and will monitor the potentially infected person’s compliance with quarantine. The Ministry of Health will not hold this information; instead, it can make data requests to the police and Shin Bet, the Israel Security Agency. The police will enforce quarantine measures and Shin Bet will track down those who came into contact with the potentially infected.

Multiple Eastern European nations with constitutional protections for citizens’ rights of movement and privacy have superseded them by declaring a state of emergency. For example, in Hungary the declaration of a “state of danger” has enabled Prime Minister Viktor Orbán’s government to engage in “extraordinary emergency measures” without parliamentary consent.  His ministers have cited the possibility that coronavirus will prevent a gathering of a sufficient quorum of members of Parliament as making it necessary for the government to be able to act in the absence of legislative approval.

Member States of the European Union must protect personal data pursuant to the General Data Protection Regulation, and communications data, such as mobile location, pursuant to the ePrivacy Directive. The chair of the European Data Protection Board has observed that the ePrivacy Directive enables Member States to introduce legislative measures to safeguard public security. But if those measures allow for the processing of non-anonymized location data from mobile devices, individuals must have safeguards such as a right to a judicial remedy. “Invasive measures, such as the ‘tracking’ of individuals (i.e. processing of historical non-anonymized location data) could be considered proportional under exceptional circumstances and depending on the concrete modalities of the processing.” The EDPB has announced it will prioritize guidance on these issues.

EU Member States are already implementing such public security measures. For example, the government of Poland has by statute required everyone under a quarantine order due to suspected infection to download the “Home Quarantine” smartphone app. Those who do not install and use the app are subject to a fine. The app verifies users’ compliance with quarantine through selfies and GPS data. Users’ personal data will be administered by the Minister of Digitization, who has appointed a data protection officer. Each user’s identification, name, telephone number, quarantine location and quarantine end date can be shared with police and other government agencies. After two weeks, if the user does not report symptoms of Covid-19, the account will be deactivated — but the data will be stored for six years. The Ministry of Digitization claims that it must store the data for six years in case users pursue claims against the government. However, local privacy expert and Panoptykon Foundation cofounder Katarzyna Szymielewicz has questioned this rationale.

Even other countries that are part of the Anglo-American legal tradition are ramping up their use of data and working with the private sector to do so. The UK’s National Health Service is developing a data store that will include online/call center data from NHS Digital and Covid-19 test result data from the public health agency. While the NHS is working with private partner organizations and companies including Microsoft, Palantir Technologies, Amazon Web Services and Google, it has promised to keep all the data under its control, and to require those partners to destroy or return the data “once the public health emergency situation has ended.” The NHS also has committed to meet the requirements of data protection legislation by ensuring that individuals cannot be re-identified from the data in the data store.

Notably, each of the companies partnering with the NHS at one time or another has been subjected to scrutiny for its privacy practices. Some observers have noted that tech companies, which have been roundly criticized for a variety of reasons in recent years, may seek to use this pandemic for “reputation laundering.” As one observer cautioned: “Reputations matter, and there’s no reason the government or citizens should cast bad reputations aside when choosing who to work with or what to share” during this public health crisis.

At home

In the U.S., the federal government last enforced large-scale isolation and quarantine measures during the influenza (“Spanish Flu”) pandemic a century ago. But the Centers for Disease Control and Prevention track diseases on a daily basis by receiving case notifications from every state. The states mandate that healthcare providers and laboratories report certain diseases to the local public health authorities using personal identifiers. In other words, if you test positive for coronavirus, the government will know. Every state has laws authorizing quarantine and isolation, usually through the state’s health authority, while the CDC has authority through the federal Public Health Service Act and a series of presidential executive orders to exercise quarantine and isolation powers for specific diseases, including severe acute respiratory syndromes (a category into which the novel coronavirus falls).

Now local governments are issuing orders that empower law enforcement to fine and jail Americans for failing to practice social distancing. State and local governments have begun arresting and charging people who violate orders against congregating in groups. Rhode Island is requiring every non-resident who enters the state to be quarantined for two weeks, with police checks at the state’s transportation hubs and borders.

How governments discover violations of quarantine and social distancing orders will raise privacy concerns. Police have long been able to enforce based on direct observation of violations. But if law enforcement authorities identify violations of such orders based on data collection rather than direct observation, the Fourth Amendment may be implicated. In Jones and Carpenter, the Supreme Court has limited the warrantless tracking of Americans through GPS devices placed on their cars and through cellphone data. But building on the longstanding practice of contact tracing in fighting infectious diseases such as tuberculosis, GPS data has proven helpful in fighting the spread of Covid-19. This same data, though, also could be used to piece together evidence of violations of stay-at-home orders. As Chief Justice John Roberts wrote in Carpenter, “With access to [cell-site location information], the government can now travel back in time to retrace a person’s whereabouts… Whoever the suspect turns out to be, he has effectively been tailed every moment of every day for five years.”

The Fourth Amendment protects American citizens from government action, but the “reasonable expectation of privacy” test applied in Fourth Amendment cases connects the arenas of government action and commercial data collection. As Professor Paul Ohm of the Georgetown University Law Center notes, “the dramatic expansion of technologically-fueled corporate surveillance of our private lives automatically expands police surveillance too, thanks to the way the Supreme Court has construed the reasonable expectation of privacy test and the third-party doctrine.”

For example, the COVID-19 Mobility Data Network – infectious disease epidemiologists working with Facebook, Camber Systems and Cubiq – uses mobile device data to inform state and local governments about whether social distancing orders are effective. The tech companies give the researchers aggregated data sets; the researchers give daily situation reports to departments of health, but say they do not share the underlying data sets with governments. The researchers have justified this model based on users of the private companies’ apps having consented to the collection and sharing of data.
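As an illustration of the kind of aggregation that separates researchers’ situation reports from the underlying device-level data, here is a minimal sketch. The record layout and the suppression threshold are my own assumptions; the actual COVID-19 Mobility Data Network pipelines are not public at this level of detail.

```python
from collections import Counter

# Illustrative sketch only: record layout and the suppression threshold are
# assumptions, not the actual COVID-19 Mobility Data Network pipeline.

SUPPRESSION_THRESHOLD = 50  # drop region-day cells too small to publish safely


def aggregate_pings(pings):
    """Collapse device-level pings into counts of distinct devices per (region, day)."""
    seen = set()
    counts = Counter()
    for device_id, region, day in pings:
        key = (region, day)
        if (device_id, key) in seen:
            continue  # count each device at most once per region-day
        seen.add((device_id, key))
        counts[key] += 1
    # Report only sufficiently large cells; small counts are withheld entirely.
    return {key: n for key, n in counts.items() if n >= SUPPRESSION_THRESHOLD}


# A health department would receive only the coarse counts, never the raw pings.
example = [("device-1", "county-A", "2020-04-10"),
           ("device-1", "county-A", "2020-04-10"),
           ("device-2", "county-A", "2020-04-10")]
print(aggregate_pings(example))  # {} -- the single small cell is suppressed
```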

However, the assumption that consumers have given informed consent to the collection of their data (particularly for the purpose of monitoring their compliance with social isolation measures during a pandemic) is undermined by studies showing the average consumer does not understand all the different types of data that are collected and how their information is analyzed and shared with third parties – including governments. Technology and telecommunications companies have neither asked me to opt into tracking for public health nor made clear how they are partnering with federal, state and local governments. This practice highlights that data will be divulged in ways consumers cannot imagine – because no one anticipated a pandemic when agreeing to a company’s privacy policy. This information asymmetry is part of why we need federal privacy legislation.

On Friday afternoon, Apple and Google announced their opt-in Covid-19 contact tracing technology. The owners of the two most common mobile phone operating systems in the U.S. said that in May they would release application programming interfaces that enable interoperability between iOS and Android devices using official contact tracing apps from public health authorities. At an unspecified date, Bluetooth-based contact tracing will be built directly into the operating systems. “Privacy, transparency, and consent are of utmost importance in this effort,” the companies said in their press release.  

At this early stage, we do not yet know exactly how the proposed Google/Apple contact tracing system will operate. It sounds similar to Singapore’s TraceTogether, which is already available in the iOS and Android mobile app stores (it has a 3.3 out of 5 average rating in the former and a 4.0 out of 5 in the latter). TraceTogether is also described as a voluntary, Bluetooth-based system that avoids GPS location data, does not upload information without the user’s consent, and uses changing, encrypted identifiers to maintain user anonymity. Perhaps the most striking difference, at least to a non-technical observer, is that TraceTogether was developed and is run by the Singaporean government, which has been a point of concern for some observers. The U.S. version – like finding abducted children through Amber Alerts and fighting crime via Amazon Ring – will be a partnership between the public and private sectors.     
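For readers curious about how Bluetooth-based tracing can avoid GPS data and stable identifiers, the sketch below illustrates the general idea of short-lived, rotating identifiers that stay on the device. It is not the Apple/Google Exposure Notification design or TraceTogether’s BlueTrace protocol; the rotation interval, token format, and proximity threshold are assumptions made only for exposition.

```python
import secrets
import time

# Simplified illustration of rotating-identifier contact tracing. This is NOT
# the Apple/Google Exposure Notification design or TraceTogether's BlueTrace
# protocol; the rotation interval, token format, and proximity threshold are
# assumptions made for exposition.

ROTATION_SECONDS = 15 * 60   # assumed lifetime of each ephemeral identifier
PROXIMITY_RSSI_DBM = -70     # assumed signal-strength cutoff for "close contact"


class EphemeralBeacon:
    """Broadcasts short-lived random identifiers; the history stays on the device."""

    def __init__(self):
        self._token = secrets.token_hex(16)
        self._expires = time.time() + ROTATION_SECONDS
        self.broadcast_history = []  # uploaded only if the user consents after diagnosis

    def current_token(self):
        """Return the ephemeral identifier, rotating it once it has expired."""
        if time.time() >= self._expires:
            self.broadcast_history.append(self._token)
            self._token = secrets.token_hex(16)
            self._expires = time.time() + ROTATION_SECONDS
        return self._token


def record_sighting(local_log, observed_token, rssi_dbm):
    """Store tokens heard nearby; no location coordinates are collected."""
    if rssi_dbm > PROXIMITY_RSSI_DBM:
        local_log.append((observed_token, int(time.time())))
```

In designs of this general shape, matching happens on the device: only the tokens of users who test positive and consent are published, and each phone checks its own local log against that list.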

Recommendations

The global pandemic we now face is driving data usage in ways not contemplated by consumers. Entities in the private and public sector are confronting new and complex choices about data collection, usage and sharing. Organizations with Chief Privacy Officers, Chief Information Security Officers, and other personnel tasked with managing privacy programs are, relatively speaking, well-equipped to address these issues. Despite the extraordinary circumstances, senior management should continue to rely on the expertise and sound counsel of their CPOs and CISOs, who should continue to make decisions based on their established privacy and data security programs. Although developments are unfolding at warp speed, it is important – arguably now, more than ever – to be intentional about privacy decisions.

For organizations that lack experience with privacy and data security programs (and individuals tasked with oversight for these areas), now is a great time to pause, do some research and exercise care. It is essential to think about the longer-term ramifications of choices made about data collection, use and sharing during the pandemic. The FTC offers easily accessible resources, including Protecting Personal Information: A Guide for Business, Start with Security: A Guide for Business, and Stick with Security: A Business Blog Series. While the Gramm-Leach-Bliley Act (GLB) applies only to financial institutions, the FTC’s GLB compliance blog outlines some data security best practices that apply more broadly. The National Institute for Standards and Technology (NIST) also offers security and privacy resources, including a privacy framework to help organizations identify and manage privacy risks. Private organizations such as the Center for Information Policy Leadership, the International Association of Privacy Professionals and the App Association also offer helpful resources, as do trade associations. While it may seem like a suboptimal time to take a step back and focus on these strategic issues, remember that privacy and data security missteps can cause irrevocable harm. Counterintuitively, now is actually the best time to be intentional about choices in these areas.

Best practices like accountability, risk assessment and risk management will be key to navigating today’s challenges. Companies should take the time to assess and document the new and/or expanded risks from the data collection, use and sharing of personal information. It is appropriate for these risk assessments to incorporate potential benefits and harms not only to the individual and the company, but for society as a whole. Upfront assessments can help companies establish controls and incentives to facilitate responsible behavior, as well as help organizations demonstrate that they are fully aware of the impact of their choices (risk assessment) and in control of their impact on people and programs (risk mitigation). Written assessments can also facilitate transparency with stakeholders, raise awareness internally about policy choices and assist companies with ongoing monitoring and enforcement. Moreover, these assessments will facilitate a return to “normal” data practices when the crisis has passed.  

In a similar vein, companies must engage in comprehensive vendor management with respect to the entities that are proposing to use and analyze their data. In addition to vetting proposed data recipients thoroughly, companies must be selective concerning the categories of information shared. The benefits of the proposed research must be balanced against individual protections, and companies should share only those data necessary to achieve the stated goals. To the extent feasible, data should be shared in de-identified and aggregated formats and data recipients should be subject to contractual obligations prohibiting them from re-identification. Moreover, companies must have policies in place to ensure compliance with research contracts, including data deletion obligations and prohibitions on data re-identification, where appropriate. Finally, companies must implement mechanisms to monitor third party compliance with contractual obligations.
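To illustrate what sharing “only those data necessary” in de-identified form can look like in practice, here is a minimal sketch. The field names, the keyed-hash pseudonymization, and the allow-list of shared fields are assumptions for exposition, not a complete or recommended de-identification program.

```python
import hashlib
import hmac

# Minimal sketch, not a vetted de-identification pipeline: field names and the
# keyed-hash approach are assumptions for illustration only.

SECRET_KEY = b"held-by-the-data-owner-and-never-shared"

DIRECT_IDENTIFIERS = {"name", "email", "phone", "street_address"}


def pseudonymize(value):
    """Replace an identifier with a keyed hash the recipient cannot reverse or link elsewhere."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]


def prepare_for_sharing(record, fields_needed):
    """Share only the fields the research requires, with direct identifiers removed."""
    shared = {}
    for field in fields_needed:
        if field in DIRECT_IDENTIFIERS:
            continue  # direct identifiers are never shared
        if field == "user_id":
            shared["user_pseudonym"] = pseudonymize(record[field])
        elif field in record:
            shared[field] = record[field]
    return shared


# Usage: only the minimum agreed-upon fields leave the company.
raw = {"user_id": "u-123", "name": "Jane Roe", "zip3": "981", "symptom_onset": "2020-04-02"}
print(prepare_for_sharing(raw, ["user_id", "zip3", "symptom_onset"]))
```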

Similar principles of necessity and proportionality should guide governments as they make demands or requests for information from the private sector. Governments must recognize the weight with which they speak during this crisis and carefully balance data collection and usage with civil liberties. In addition, governments also have special obligations to ensure that any data collection done by them or at their behest is driven by the science of Covid-19; to be transparent with citizens about the use of data; and to provide due process for those who wish to challenge limitations on their rights. Finally, government actors should apply good data hygiene, including regularly reassessing the breadth of their data collection initiatives and incorporating data retention and deletion policies. 

In theory, government’s role could be reduced as market-driven responses emerge. For example, assuming the existence of universally accessible daily coronavirus testing with accurate results even during the incubation period, Hal Singer’s proposal for self-certification of non-infection among private actors is intriguing. Thom Lambert identified the inability to know who is infected as a “lemon problem;” Singer seeks a way for strangers to verify each other’s “quality” in the form of non-infection.

Whatever solutions we may accept in a pandemic, it is imperative to monitor the coronavirus situation as it improves, to know when to lift the more dire measures. Former Food and Drug Administration Commissioner Scott Gottlieb and other observers have called for maintaining surveillance because of concerns about a resurgence of the virus later this year. For any measures that conflict with Americans’ constitutional rights to privacy and freedom of movement, there should be metrics set in advance for the conditions that will indicate when such measures are no longer justified. In the absence of pre-determined metrics, governments may feel the same temptation as Hungary’s prime minister to keep renewing a “state of danger” that overrides citizens’ rights. As Slovak lawmaker Tomas Valasek has said, “It doesn’t just take the despots and the illiberals of this world, like Orbán, to wreak damage.” But privacy is not merely instrumental to other interests, and we do not have to sacrifice our right to it indefinitely in exchange for safety.

I recognize that halting the spread of the virus will require extensive and sustained effort, and I credit many governments with good intentions in attempting to save the lives of their citizens. But I refuse to accept that we must sacrifice privacy to reopen the economy. It seems a false choice to say that I must sacrifice my Constitutional rights to privacy, freedom of association and free exercise of religion for another’s freedom of movement. Society should demand that equity, fairness and autonomy be respected in data uses, even in a pandemic. To quote Valasek again: “We need to make sure that we don’t go a single inch further than absolutely necessary in curtailing civil liberties in the name of fighting for public health.” History has taught us repeatedly that sweeping security powers granted to governments during an emergency persist long after the crisis has abated. To resist the gathering momentum toward this outcome, I will continue to emphasize the FTC’s learning on appropriate data collection and use. But my remit as an FTC Commissioner is even broader – when I was sworn in on Sept. 26, 2018, I took an oath to “support and defend the Constitution of the United States” – and so I shall.


[1] Many thanks to my Attorney Advisors Pallavi Guniganti and Nina Frant for their invaluable assistance in preparing this article.

It is a bedrock principle underlying the First Amendment that the government may not penalize private speech merely because it disapproves of the message it conveys.

The Federal Circuit handed down a victory for free expression today — in the commercial context no less. At issue was the Lanham Act’s § 2(a) prohibition of trademark registrations that

[c]onsist[] of or comprise[] immoral, deceptive, or scandalous matter; or matter which may disparage or falsely suggest a connection with persons, living or dead, institutions, beliefs, or national symbols, or bring them into contempt, or disrepute.

The court, sitting en banc, held that the “disparaging” provision is an unconstitutional violation of free expression, and that trademarks will indeed be protected by the First Amendment. Although it declined to decide whether the other prohibitions actually violated the First Amendment, the opinion contained a very strong suggestion to future panels that this opinion likely applies in that context as well.

In many respects the opinion was not all that surprising (particularly if you’ve read my thoughts on the subject here and here). However, given that a predecessor Court of Customs and Patent Appeals decision, In Re McGinley, once held that First Amendment concerns were not implicated at all by § 2(a) because “it is clear that the … refusal to register appellant’s mark does not affect his right to use it” — totally ignoring, of course, the chilling effects on speech — it was by no means certain that this case would come out the right way.

Today’s holding vacated a decision from a three-judge panel that, earlier this year, upheld the ill-fated “disparaging” prohibition. From just a cursory reading of § 2(a), it should be a no-brainer that it clearly implicates the content of speech — if not a particular viewpoint — and should get at least some First Amendment scrutiny. However, the earlier three-judge opinion gave all of three paragraphs to this consideration — one of which was just a quotation from McGinley. There, the three-judge panel rather tersely concluded that the First Amendment argument was “foreclosed by our precedent.”

Thus it was with pleasure that I read the Federal Circuit as it today acknowledged that “[m]ore than thirty years have passed since the decision in McGinley, and in that time both the McGinley decision and our reliance on it have been widely criticized[.]” The core of the First Amendment analysis is fairly straightforward: barring “disparaging” marks from registration is neither content neutral nor viewpoint neutral, and is therefore subject to strict scrutiny (which it fails). The court notes that McGinley’s First Amendment analysis was “cursory” (to put it mildly), and was decided before a fully developed body of commercial speech doctrine had emerged. Overall, the opinion is a good example of subtle, probing First Amendment analysis, wherein the court really grasps that merely labeling speech as “commercial” does not somehow magically strip away any protected expressive content.

In fact, perhaps the most important and interesting material has to do with this commercial speech analysis. The court acknowledges that the government’s policy against “disparaging” marks targets the expressive aspects of trademarks and not the more easily regulable “transactional” aspects (such as product information, pricing, etc.) — to look at § 2(a) otherwise would not make sense, as the government is rather explicitly trying to stop certain messages because of their noncommercial aspects. And the court importantly acknowledges the Supreme Court’s admonition that “[a] consumer’s concern for the free flow of commercial speech often may be far keener than his concern for urgent political dialogue” (although I might go so far as to hazard a guess that commercial speech is more important than political speech, most of the time, to most people, but perhaps I am just cynical).

The upshot of the Federal Circuit’s new view of trademarks and “commercial speech” reinforces the notion that regulations and laws that are directed toward “commercial speech” need to be very narrowly focused on the actual “commercial” message — pricing, source, etc. — and cannot veer into controlling the “expressive” aspects without justification under strict scrutiny. Although there is nothing terribly new or shocking here, the opinion ties together a variety of the commercial speech doctrines, gives much needed clarity to trademark registration, and reaffirms a sensible view of commercial speech law.

And, although I may be reading too deeply based on my preferences, I think the opinion is quietly staking out a useful position for commercial speech cases going forward—at least to a speech maximalist like myself. In particular, it explicitly relies upon the “unconstitutional conditions” doctrine for the proposition that the benefits of government programs cannot be granted upon a condition that a party only engage in “good” or “approved” commercial speech. As the world becomes increasingly interested in hate speech regulation, and our college campuses more interested in preparing a generation of “safe spacers” than of critically thinking adults, this will undoubtedly become an important arrow in a speech defender’s quiver.


I have small children and, like any reasonably competent parent, I take an interest in monitoring their Internet usage. In particular, I am sensitive to what ad content they are being served and which sites they visit that might try to misuse their information. My son even uses Chromebooks at his elementary school, which underscores this concern for me, as I can’t always be present to watch what he does online. However, also like any other reasonably competent parent, I trust his school and his teacher to make good choices about what he is allowed to do online when I am not there to watch him. And so it is that I am both interested in and rather perplexed by what has EFF so worked up in its FTC complaint alleging privacy “violations” in the “Google for Education” program.

EFF alleges three “unfair or deceptive” acts that would subject Google to remedies under Section 5 of the FTCA: (1) Students logged into “Google for Education” accounts have their non-educational behavior individually tracked (e.g. performing general web searches, browsing YouTube, etc.); (2) the Chromebooks distributed as part of the “Google for Education” program have the “Chrome Sync” feature turned on by default (ostensibly in a terribly diabolical effort to give students a seamless experience between using the Chromebooks at home and at school); and (3) the school administrators running particular instances of “Google for Education” have the ability to share student geolocation information with third-party websites. Each of these acts, claims EFF, violates the K-12 School Service Provider Pledge to Safeguard Student Privacy (“Pledge”) that was authored by the Future of Privacy Forum and Software & Information Industry Association, and to which Google is a signatory. According to EFF, Google included references to its signature in its “Google for Education” marketing materials, thereby creating the expectation in parents that it would adhere to the principles, failed to do so, and thus should be punished.

The TL;DR version: EFF appears to be making some simple interpretational errors — it believes that the scope of the Pledge covers any student activity and data generated while a student is logged into a Google account. As the rest of this post will (hopefully) make clear, however, the Pledge, though ambiguous, is more reasonably read as limiting Google’s obligations to instances where a student is using  Google for Education apps, and does not apply to instances where the student is using non-Education apps — whether she is logged on using her Education account or not.

The key problem, as EFF sees it, is that Google “use[d] and share[d] … student personal information beyond what is needed for education.” So nice of them to settle complex business and educational decisions for the world! Who knew it was so easy to determine exactly what is needed for educational purposes!

Case in point: EFF feels that Google’s use of anonymous and aggregated student data in order to improve its education apps is not an educational purpose. Seriously? How can that not be useful for educational purposes — to improve its educational apps!?

And, according to EFF, the fact that Chrome Sync is ‘on’ by default in the Chromebooks only amplifies the harm caused by the non-Education data tracking because, when the students log in outside of school, their behavior can be correlated with their in-school behavior. Of course, this ignores the fact that the same limitations apply to the tracking — it happens only on non-Education apps. Thus, the Chrome Sync objection is somehow vaguely based on geography. The fact that Google can correlate an individual student’s viewing of a Neil DeGrasse Tyson video in a computer lab at school with her later finishing that video at home is somehow really bad (or so EFF claims).

EFF also takes issue with the fact that school administrators are allowed to turn on a setting enabling third parties to access the geolocation data of Google education apps users.

The complaint is fairly sparse on this issue — and the claim is essentially limited to the assertion that “[s]haring a student’s physical location with third parties is unquestionably sharing personal information beyond what is needed for educational purposes[.]” While it’s possible that third parties could misuse student data, a presumption that it is per se outside of any educational use for third parties to have geolocation access at all strikes me as unreasonable.

Geolocation data, particularly on mobile devices, could allow for any number of positive and negative uses, and without more it’s hard to really take EFF’s premature concern all that seriously. Did they conduct a study demonstrating that geolocation data can serve no educational purpose or that the feature is frequently abused? Sadly, it seems doubtful. Instead, they appear to be relying upon the rather loose definition of likely harm that we have seen in FTC actions in other contexts (more on this problem here).

Who decides what ambiguous terms mean?

The bigger issue, however, is the ambiguity latent in the Pledge and how that ambiguity is being exploited to criticize Google. The complaint barely conceals EFF’s eagerness, and gives one the distinct feeling that the Pledge and this complaint are part of a long game. Everyone knows that Google’s entire existence revolves around the clever and innovative employment of large data sets. When Google announced that it was interested in working with schools to provide technology to students, I can only imagine how the anti-big-data-for-any-commercial-purpose crowd sat up and took notice, just waiting to pounce as soon as an opportunity, no matter how tenuous, presented itself.

EFF notes that “[u]nlike Microsoft and numerous other developers of digital curriculum and classroom management software, Google did not initially sign onto the Student Privacy Pledge with the first round of signatories when it was announced in the fall of 2014.” Apparently, it is an indictment of Google that it hesitated to adopt an external statement of privacy principles that was authored by a group that had no involvement with Google’s internal operations or business realities. EFF goes on to note that it was only after “sustained criticism” that Google “reluctantly” signed the pledge. So the company is badgered into signing a pledge that it was reluctant to sign in the first place (almost certainly for exactly these sorts of reasons), and is now being skewered by the proponents of the pledge that it was reluctant to sign. Somehow I can’t help but get the sense that this FTC complaint was drafted even before Google signed the Pledge.

According to the Pledge, Google promised to:

  1. “Not collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes, or as authorized by the parent/student.”
  2. “Not build a personal profile of a student other than for supporting authorized educational/school purposes or as authorized by the parent/student.”
  3. “Not knowingly retain student personal information beyond the time period required to support the authorized educational/school purposes, or as authorized by the parent/student.”

EFF interprets “educational purpose” as anything a student does while logged into her education account, so that, by extension, even non-educational activity counts as “student personal information.” I think that a fair reading of the Pledge undermines this position, however, and that the correct interpretation is that “educational purpose” and “student personal information” are more tightly coupled, such that Google’s ability to collect student data is circumscribed only when the student is actually using the Google for Education Apps.

So what counts as “student personal information” in the pledge? “Student personal information” is “personally identifiable information as well as other information when it is both collected and maintained on an individual level and is linked to personally identifiable information.”  Although this is fairly broad, it is limited by the definition of “Educational/School purposes” which are “services or functions that customarily take place at the direction of the educational institution/agency or their teacher/employee, for which the institutions or agency would otherwise use its own employees, and that aid in the administration or improvement of educational and school activities.” (emphasis added).

This limitation in the Pledge essentially sinks EFF’s complaint. A major part of EFF’s gripe is that when the students interact with non-Education services, Google tracks them. However, the Pledge limits the collection of information only in contexts where “the institutions or agency would otherwise use its own employees” — a definition that clearly does not extend to general Internet usage. This definition would reasonably cover activities like administering classes, tests, and lessons. This definition would not cover activity such as general searches, watching videos on YouTube and the like. Key to EFF’s error is that the pledge is not operative on accounts but around activity — in particular educational activity “for which the institutions or agency would otherwise use its own employees.”

To interpret Google’s activity in the way that EFF does is to treat the Pledge as a promise never to do anything, ever, with the data of a student logged into an education account, whether generated as part of Education apps or otherwise. That just can’t be right. Thinking through the implications of EFF’s complaint, the ultimate end has to be that Google needs to obtain a permission slip from parents before offering access to Google for Education accounts. Administrators and Google are just not allowed to provision any services otherwise.

And here is where the long game comes in. EFF and its peers induced Google to sign the Pledge all the while understanding that their interpretation would necessarily require a re-write of Google’s business model.  But not only is this sneaky, it’s also ridiculous. By way of analogy, this would be similar to allowing parents an individual say over what textbooks or other curricular materials their children are allowed to access. This would either allow for a total veto by a single parent, or else would require certain students to be frozen out of participating in homework and other activities being performed with a Google for Education app. That may work for Yale students hiding from microaggressions, but it makes no sense to read such a contentious and questionable educational model into Google’s widely-offered apps.

I think a more reasonable interpretation should prevail. The privacy pledge is meant to govern the use of student data while that student is acting as a student — which in the case of Google for Education apps would mean while using said apps. Plenty of other Google apps could be used for educational purposes, but Google is intentionally delineating a sensible dividing line in order to avoid exactly this sort of problem (as well as problems that could arise under other laws directed at student activity, like COPPA, most notably). It is entirely unreasonable to presume that Google, by virtue of its socially desirable behavior of enabling students to have ready access to technology, is thereby prevented from tracking individuals’ behavior on non-Education apps as it chooses to define them.

What is the Harm?

According to EFF, there are two primary problems with Google’s gathering and use of student data: gathering and using individual data in non-Education apps, and gathering and using anonymized and aggregated data in the Education apps. So to what evil end does Google put the data it gathers outside the Education apps?

“Google not only collects and stores the vast array of student data described above, but uses it for its own purposes such as improving Google products and serving targeted advertising (within non-Education Google services)”

The horrors! Google wants to use student behavior to improve its services! And yes, I get it, everyone hates ads — I hate ads too — but at some point you need to learn to accept that the wealth of nominally free apps available to every user is underwritten by the ad-sphere. So if Google is using the non-Education behavior of students to gain valuable insights that it can monetize and thereby subsidize its services, so what? This is life in the twenty-first century, and until everyone collectively decides that we prefer to pay for services up front, we had better get used to being tracked and monetized by advertisers.

But as noted above, whether you think Google should or shouldn’t be gathering this data, it seems clear that the data generated from use of non-Education apps doesn’t fall under the Pledge’s purview. Thus, perhaps sensing the problems in its non-Education use argument, EFF also half-heartedly attempts to demonize certain data practices that Google employs in the Education context. In short, Google aggregates and anonymizes the usage data of the Google for Education apps, and, according to EFF, this is a violation of the Pledge:

“Aggregating and anonymizing students’ browsing history does not change the intensely private nature of the data … such that Google should be free to use it[.]”

Again, the “harm” is that Google actually wants to improve the Education apps: “Google has acknowledged that it collects, maintains, and uses student information via Chrome Sync (in aggregated and anonymized form) for the purpose of improving Google products.”

This of course doesn’t violate the Pledge. After all, signatories to the Pledge promise only that they will “[n]ot collect, maintain, use or share student personal information beyond that needed for authorized educational/school purposes.” It’s eminently reasonable to include the improvement of the provisioned services as part of an “authorized educational … purpose[.]” And by ensuring that the data is anonymized and aggregated, Google is clearly acknowledging that some limits are appropriate in the education context — that it doesn’t need to collect individual and identifiable personal information for education purposes — but that improving its education products the same way it improves all its products is an educational purpose.

How are the harms enhanced by Chrome Sync? Honestly, it’s not really clear from EFF’s complaint. I believe that the core of EFF’s gripe (at least here) has to do with how the two data gathering activities may be correlated. Google has Chrome Sync enabled by default, so when students sign on from different locations, their Education apps usage is recorded and grouped (still anonymously) for service improvement alongside their non-Education use. The presence of these two data sets, generated side by side, creates the potential to track students in their educational capacity by correlating that data with information generated in their non-educational capacity.

Maybe there are flaws in the manner in which the data is anonymized. EFF obviously thinks anonymized data won’t stay anonymized. That is a contentious view, to say the least, but regardless, it is a view in no way compelled by the Pledge. More to the point, merely holding both data sets does not, by itself, do anything that clearly violates the Pledge.
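
To make the aggregation point concrete, here is a minimal, purely illustrative sketch in Python of what “aggregated and anonymized” usage data might look like, and why holding such a data set alongside a richer non-Education data set creates the correlation worry discussed above. This is emphatically not Google’s actual pipeline; the identifiers, salt, and app names are invented for illustration.

    import hashlib
    from collections import Counter

    def pseudonymize(student_id: str, salt: str = "hypothetical-salt") -> str:
        """Replace a student identifier with a salted hash (a hypothetical scheme)."""
        return hashlib.sha256((salt + student_id).encode()).hexdigest()[:16]

    def aggregate_usage(events):
        """Collapse per-student events into per-app totals; no identifiers survive."""
        totals = Counter()
        for event in events:
            totals[event["app"]] += 1  # keep only the app-level count
        return dict(totals)

    # Hypothetical per-student events from Education apps
    education_events = [
        {"student": pseudonymize("alice@school.example"), "app": "Docs"},
        {"student": pseudonymize("bob@school.example"), "app": "Docs"},
        {"student": pseudonymize("alice@school.example"), "app": "Classroom"},
    ]

    print(aggregate_usage(education_events))  # {'Docs': 2, 'Classroom': 1}

    # The correlation concern: if the same pseudonym also appears in a separate
    # non-Education log (searches, video views), the two data sets can be joined
    # on that pseudonym, which is why critics argue the data may be re-identifiable.

The sketch illustrates both sides of the argument: the aggregated totals contain no individual identifiers, while the pseudonymous records that feed them are what could, in principle, be joined with non-Education logs.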

The End Game

So what do groups like EFF actually want? It’s important to consider the effects on social welfare that this approach to privacy takes, and its context. First, the Pledge was overwhelmingly designed for and signed by pure education companies, and not large organizations like Google, Apple, or Microsoft — thus the nature of the Pledge itself is more or less ill-fitted to a multi-faceted business model. If we follow the logical conclusions of this complaint, a company like Google would face an undesirable choice: On the one hand, it can provide hardware to schools at zero cost or heavily subsidized prices, and also provide a suite of useful educational applications. However, as part of this socially desirable donation, it must also place a virtual invisibility shield around students once they’ve signed into their accounts. From that point on, regardless of what service they use — even non-educational ones — Google is prevented from using any data students generate. At this point, one has to question Google’s incentive to remove huge swaths of the population from its ability to gather data. If Google did nothing but provide the hardware, it could simply leave its free services online as-is, and let schools adopt or not adopt them as they wish (subject of course to extant legislation such as COPPA) — thereby allowing itself to possibly collect even more data on the same students.

On the other hand, if not Google, then surely many other companies would think twice before wading into this quagmire, or, when they do, they might offer severely limited services. For instance, one way of complying with EFF’s view of how the Pledge works would be to shut off access to all non-Education services. So, students logged into an education account could only access the word processing and email services, but would be prevented from accessing YouTube, web search and other services — and consequently suffer from a limitation of potentially novel educational options.

EFF goes on to cite numerous FTC enforcement actions and settlements from recent years. But all of the cited examples have one thing in common that the current complaint does not: they all involve violations of § 5 based on explicit statements or representations made by a company to consumers. EFF’s complaint, on the other hand, rests on a particular interpretation of an ambiguous document that was drafted in general terms, without the complicated business practices at issue here in mind. What counts as “student information” when a user employs a general purpose machine for both educational and non-educational purposes? The Pledge — at least the sections that EFF relies upon in its complaint — is far from clear and doesn’t cover Google’s behavior in an obvious manner.

Of course, the whole complaint presumes that the nature of Google’s services was somehow unfair or deceptive to parents — thus implying that there was at least some material reliance on the Pledge in parental decision making. However, this misses a crucial detail: it is the school administrators who contract with Google for the Chromebooks and Google for Education services, and not the parents or the students.  Then again, maybe EFF doesn’t care and it is, as I suggest above, just interested in a long game whereby it can shoehorn Google’s services into some new sort of privacy regime. This isn’t all that unusual, as we have seen even the White House in other contexts willing to rewrite business practices wholly apart from the realities of privacy “harms.”

But in the end, this approach to privacy is just a very efficient way to discover the lowest common denominator in charity. If they even decide to brave the possible privacy suits, Google and other similarly situated companies will provide the barest access to the most limited services in order to avoid extensive liability from ambiguous pledges. And, perhaps even worse for overall social welfare, using the law to force compliance with voluntarily enacted, ambiguous codes of conduct is a sure-fire way to ensure that there are fewer and more limited codes of conduct in the future.

The Children’s Online Privacy Protection Act (COPPA) continues to be a hot-button issue for many online businesses and privacy advocates. On November 14, Senator Markey, along with Senator Kirk and Representatives Barton and Rush, introduced the Do Not Track Kids Act of 2013 to amend the statute to cover children aged 13 to 15 and to add new requirements, like an eraser button. The current COPPA Rule, since the FTC’s recent update went into effect this past summer, requires parental consent before businesses can collect information about children online, including relatively de-identified information like IP addresses and device numbers that allow for targeted advertising.

Often, the debate about COPPA is framed in a way that makes it very difficult to discuss as a policy matter. With the stated purpose of “enhanc[ing] parental involvement in children’s online activities in order to protect children’s privacy,” who can really object? While there is recognition that there are substantial costs to COPPA compliance (including foregone innovation and investment in children’s media), it’s generally taken for granted by all that the Rule is necessary to protect children online. But it has never been clear what COPPA is supposed to help us protect our children from.

Then-Representative Markey’s original speech suggested one possible answer in “protect[ing] children’s safety when they visit and post information on public chat rooms and message boards.” If COPPA is to be understood in this light, the newest COPPA revision from the FTC and the proposed Do Not Track Kids Act of 2013 largely miss the mark. It seems unlikely that proponents worry about children or teens posting their IP address or device numbers online, allowing online predators to look at this information and track them down. Rather, the clear goal animating the updates to COPPA is to “protect” children from online behavioral advertising. Here’s now-Senator Markey’s press statement:

“The speed with which Facebook is pushing teens to share their sensitive, personal information widely and publicly online must spur Congress to act commensurately to put strong privacy protections on the books for teens and parents,” said Senator Markey. “Now is the time to pass the bipartisan Do Not Track Kids Act so that children and teens don’t have their information collected and sold to the highest bidder. Corporations like Facebook should not be profiting from the personal and sensitive information of children and teens, and parents and teens should have the right to control their personal information online.”

The concern about online behavioral advertising could probably be understood in at least three ways, but each of them is flawed.

  1. Creepiness. Some people believe there is something just “creepy” about companies collecting data on consumers, especially when it comes to children and teens. While nearly everyone would agree that surreptitiously collecting data like email addresses or physical addresses without consent is wrong, many would probably prefer to trade data like IP addresses and device numbers for free content (as nearly everyone does every day on the Internet). It is also unclear that COPPA is the answer to this type of problem, even if it could be defined. As Adam Thierer has pointed out, parents are in a much better position than government regulators or even companies to protect their children from privacy practices they don’t like.
  2. Exploitation. Another way to understand the concern is that companies are exploiting consumers by making money off their data without consumers getting any value in return. But this fundamentally ignores the multi-sided market at play here. Users trade information for a free service, whether it be Facebook, Google, or Twitter. These services then monetize that information by creating profiles and selling them to advertisers. Advertisers then place ads based on that information with the hope of increasing sales. In the end, though, companies make money only when consumers buy their products. Free content funded by such advertising is likely a win-win-win for everyone involved.
  3. False Consciousness. A third way to understand the concern over behavioral advertising is that corporations can convince consumers to buy things they don’t need or really want through advertising. Much of this is driven by what Jack Calfee called The Fear of Persuasion: many people don’t understand the beneficial effects of advertising in increasing the information available to consumers and, as a result, misdiagnose the role of advertising. Even accepting this false consciousness theory, the difficulty for COPPA is that no one has ever explained why advertising is a harm to children or teens. If anything, online behavioral advertising is less of a harm to teens and children than adults for one simple reason: Children and teens can’t (usually) buy anything! Kids and teens need their parents’ credit cards in order to buy stuff online. This means that parental involvement is already necessary, and has little need of further empowerment by government regulation.

COPPA may have benefits in preserving children’s safety — as Markey once put it — beyond what underlying laws, industry self-regulation and parental involvement can offer. But as we work to update the law, we shouldn’t allow the Rule to be a solution in search of a problem. It is incumbent upon Markey and other supporters of the latest amendment to demonstrate that the amendment will serve to actually protect kids from something they need protecting from. Absent that, the costs very likely outweigh the benefits.

We’ve been discussing the proxy access case (see here, here, and here), where the DC Circuit overturned an SEC rule for failure to meet its requirement under the National Securities Markets Improvement Act of 1996 to consider the effect of the rule on efficiency, competition, and capital formation in addition to its historical mandate to consider investor protection. As I mentioned in my last post on the topic, I am working on an article this semester to draw the boundaries for what the four principles require in SEC rule-making.

We haven’t had a good blog confrontation in a while, so I am glad Jay Brown, our rival at Race To The Bottom, has decided to engage with my efforts. Jay takes issue with my recent post:

There are several comments to be made about this approach.  First, the post mistakenly assumes that the issue in the case was cost-benefit analysis.  In fact, the primary authority used by the court in Business Roundtable was the SEC’s obligation to analyze the effects of a rule on competition, efficiency and capital raising.  See Section 3(f), 15 USC 78c(f).

Incorrect.  In the view of Congress, cost-benefit analysis is part and parcel of any consideration of efficiency, competition, capital formation, and investor protection for that matter.  His review of the legislative history of the NSMIA is incomplete.  Congress clearly had cost-benefit analysis in mind as it considered the bill.  Consider this statement from Thomas J. Bliley Jr. (R), Chairman of the House Commerce Committee that reported out the bill:

The substitute (the NSMIA) maintains the provision (the efficiency, competition and capital formation language) requiring cost benefit analysis in SEC rulemaking, which we think is very important in light of the enhanced Congressional role mandated for SEC and SRO rules under the Small Business Regulatory Enforcement Act of 1996. (1996 WL 270857 (F.D.C.H.))

Or consider this from Congressman Jack Fields, the original sponsor of the NSMIA, at the first hearing on the legislation:

On the subject of capital formation: My bill calls for the SEC to consider the promotion of efficiency, competition and capital formation when it makes rules. This is an important provision of the bill because it will introduce an element of explicit cost benefit analysis into SEC rulemaking. We want to encourage the SEC to take efficiency, competition and capital formation into account in its rulemaking. We view these goals as complementary to the important goal of investor protection. (1995 WL 706020 (F.D.C.H.))

Jay Brown critiques my argument further:

Second, the comment that cost-benefit analysis “is the only legitimate mode of analysis” reflects JW Verrett’s “policy” perspective but it does not reflect the law.  In adopting Section 3(f), Congress had this to say:

  • “The new section makes clear that matters relating to efficiency, competition, and capital formation are only part of the public interest determination, which also includes, among other things, consideration of the protection of investors.  For 62 years, the foremost mission of the Commission has been investor protection, and this section does not alter the Commission’s mission.”

H. Rep. 104-622, 104th Cong., 2nd Sess., at 39 (June 17, 1996).  See also Section 3(f) in 1996, Pub. L. No. 104-290, § 106(a), 110 Stat. 3416, 3424.  In other words, the efficiency analysis was only one step required of the Commission.  Congress left open the possibility that the goals of investor protection could sometimes override the results of the economic analysis.

He closes by observing that “perhaps JW Verrett could use a course in administrative law.”  I admire Jay’s sharp rhetoric, and I certainly opened the door.

That said, he’s completely off the mark on his second counter to my post. First, to clarify, what I said was: “The legislative history of the securities laws makes clear that the objective is a purely economic one, to stabilize markets, prevent fraud, and maximize economic growth. The 33 and 34 Acts were a response to the stock market crash of 1929, after all. Cost-benefit analysis is a tough fit in areas where nebulous ideas like justice or equity are at issue (though it is still quite informative), but where, as here, the underlying objectives are purely economic, it is the only legitimate mode of analysis.”

The latter three principles, as we saw above, encompass in large part a wide form of cost-benefit analysis. What about investor protection? First we have to consider what investor protection meant in 1933 and 1934, and then we need to consider how it stands in relation to the recent additions. My review of the legislative history of the 33 and 34 Acts isn’t yet complete because, if you include the Pecora hearings, and I think you should, we’re talking thousands of pages of material. But one theme is emerging about how Congress understood the phrase “investor protection”: a focus on fraud prevention through disclosure about the value of traded assets and through the prohibition of manipulative trading schemes.

During the hearings on the NSMIA, there was apparently concern from then-SEC Chairman Arthur Levitt that eliminating “investor protection” as a guiding principle of the SEC would be detrimental. The compromise that Republicans on the Hill reached with Chairman Levitt, who had been appointed by President Clinton, maintained investor protection as a guiding principle, while the three new economic analysis constraints would apply only in cases where the SEC is acting “in the public interest,” a phrase which appears frequently in the 33, 34 and 40 Acts, including Section 10(b) and Section 14 of the 34 Act. The “public interest” modifier also responded to concerns from Chairman Levitt that the constraints would otherwise apply to enforcement or adjudication decisions. The compromise is described in a statement from Rep. Ed Markey (D), ranking member of a subcommittee considering the bill:
With respect to the SEC’s mission, we’ve agreed that the promotion of capital formation, competition, and market efficiency must be factors considered whenever the SEC is undertaking a rulemaking based on a public interest determination. This responds to Chairman Levitt’s concern that the original bill might have potentially compromised the SEC’s ability to take actions needed to protect investors.  (1996 WL 134449 (F.D.C.H.))

The investor protection rationale would be limited to rules on disclosure about the value of investments or rules aimed at manipulative trading schemes. With this limitation, investor protection actually doesn’t apply in many contexts. In the context of proxy access, that means the SEC could take into account the possibility of fraud in the rule’s requirements on nominee disclosure, for example. Minimization of principal-agent costs does not, however, fit within the investor protection principle taken in light of its historical meaning.

The investor protection principle does not apply to the Commission’s decision to make the proxy access rule mandatory rather than opt-out or opt-in, which was the issue at the heart of the proxy access challenge.  (One reader comments that the Dodd-Frank Act proxy access section specifically references investor protection.  It does not amend the NSMIA however, so I don’t buy his argument that the DFA language sets the other principles aside or changes the meaning of investor protection from its historical roots).

In contexts where the investor protection rationale does apply, Jay seems to view investor protection as a trump card that could be used to exempt the Commission from the latter three principles. But the legislative history doesn’t indicate the principles are mutually opposed, and economic analysis is certainly relevant when considering the effects of fraud and the market processes that evolve to remedy fraud. The more logical view is that including investor protection as a principle gives added weight to the cost of fraud in weighing the costs and benefits of new rules.

This discussion will form a substantial portion of the course I am teaching this semester as a Visiting Assistant Professor of Law at Stanford Law School on The Law and Regulation of Financial Institutions.  I will be sure to send Jay links to my course materials.


I posted on the FTC Report findings earlier. In sum, the FTC was able to identify only isolated and sporadic instances of pricing behavior that were not explained by changes in supply and demand conditions at the local, regional, and national level. In addition, the FTC investigation did not reveal any antitrust violations. The reactions to the FTC’s findings run the expected gamut, from political pandering, to accusations that the FTC “whitewashed” its report, to boredom from economists (to whom terms like “price gouging” and “unconscionable prices” are foreign). Here are a few examples of what I was able to find in print:

  • “So we’re likelier to see Elvis than gouging on gasoline? That’s good news for everybody, isn’t it? Maybe we’ll keep an eye out, though.” Larry Neal, spokesperson for House Energy and Commerce Committee Chairman Joe Barton (Houston Chronicle).
  • “The Bush administration is uniquely handicapped when it comes to defending the public from price gouging because it doesn’t want to embarrass its friends in Big Oil. This administration’s high-prices-are-good-for-you energy policy depends on leaving Big Oil alone to charge whatever Big Oil has decided to charge.” Rep. Edward Markey (D-Malden) (Boston Herald)
  • “Our evidence and common sense suggest a vastly different picture of unconscionable profiteering by Big Oil. The FTC has barely found the tip of the iceberg,” Connecticut’s AG Richard Blumenthal (Washington Post).
  • “Asking the FTC to determine when firms have exercised market power is not likely to yield anything very definitive, and this study hasn’t. It’s not that they’ve concluded with certainty that firms have not exercised market power, only that there is no evidence of it. It is hard to distinguish in this industry between real scarcity and artificial scarcity created by the firms.” Severin Borenstein (Berkeley Economics) (Washington Post).
  • Barbara Boxer said the findings about the refinery “fly in the face of reality,” and that “[t]his report proves that this administration is owned and operated by big oil.” (SF Chronicle).
  • Greg Mankiw (check out his great blog) notes that neither the report findings nor the reaction by politicians should be very surprising. See also Mankiw’s previous posts on price gouging here and here.

I find the accusations of industry capture thrown at the FTC disturbing. Apparently, these folks do not have any objections as to the merits of the report. Perhaps such objections are forthcoming, but I’m not holding my breath. It should also be noted that several states investigating post-hurricane pricing behavior also concluded that market forces were responsible for the price increases (see, e.g., n. 18 in FTC Commissioner Majoras’ testimony to the Senate, which accompanied the report). The burden of proof logically must be placed on the parties arguing that “gouging,” however it is defined, is at the heart of the price increases. The FTC report soundly, and strongly, rejects the notion that this burden has been satisfied to date.