
President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with consideration of a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

To ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about the absence of competition were true.

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government subsidies can increase the number of firms providing broadband. But they cannot do so without costs, which include not just the subsidies themselves, ultimately borne by taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another provider that is completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. And where a community already has municipal broadband, a private ISP is unlikely to want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach because of low population density or insufficient demand stemming from low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers: building codes could adopt more connection-friendly standards, and the cost of access to existing poles and other infrastructure could be lowered. Eligible Telecommunications Carrier (ETC) requirements could likewise be eliminated, as they deter potential providers from seeking funds for buildout and offer no countervailing benefits.

Albert Einstein once said: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high or the demand too low—not that there are too few competitors.

Despite calls from some NGOs to mandate radical interoperability, the EU’s draft Digital Markets Act (DMA) adopted a more measured approach, requiring full interoperability only in “ancillary” services like identification or payment systems. There remains the possibility, however, that the DMA proposal will be amended to include stronger interoperability mandates, or that such amendments will be introduced in the Digital Services Act. Without the right checks and balances, this could pose grave threats to Europeans’ privacy and security.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. Expanded interoperability could offer promising solutions to some of today’s difficult problems. For example, it might allow third-party developers to offer different “flavors” of social media news feed, with varying approaches to content ranking and moderation (see Daphne Keller, Mike Masnick, and Stephen Wolfram for more on that idea). After all, in a pluralistic society, someone will always be unhappy with what some others consider appropriate content. Why not let smaller groups decide what they want to see? 
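To make that concrete, below is a minimal Python sketch of what competing feed “flavors” might look like. Everything in it (the Post type, the moderation flag, the ranking rules) is invented for illustration, and it assumes third parties could somehow read a user’s raw feed:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative only: a third-party "flavor" is just a function that
# re-ranks and filters the raw feed according to community preferences.
@dataclass
class Post:
    author: str
    text: str
    likes: int
    flagged: bool  # set by whichever moderation service the user chose

def family_friendly(posts: List[Post]) -> List[Post]:
    """One flavor: drop flagged posts, then rank by popularity."""
    return sorted((p for p in posts if not p.flagged),
                  key=lambda p: p.likes, reverse=True)

def chronological(posts: List[Post]) -> List[Post]:
    """Another flavor: no ranking at all, just the order received."""
    return list(posts)

# Each community picks the flavor it prefers.
FLAVORS: Dict[str, Callable[[List[Post]], List[Post]]] = {
    "family": family_friendly,
    "firehose": chronological,
}
```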

But to achieve that goal using currently available technology, third-party developers would have to be able to access all of a platform’s content that is potentially available to a user. This would include not just content produced by users who explicitly agree to their data being shared with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It doesn’t require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

It is telling that supporters of this kind of interoperability use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns due to its prioritization of interoperability.
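For readers unfamiliar with the protocol, here is a minimal Python sketch of the classic SMTP dialogue, assuming a plain (non-TLS) server at the hypothetical host “mail.example.com.” Every line, including the message body, crosses the network as readable text, which is exactly the postcard problem described above:

```python
import socket

def send_line(sock: socket.socket, line: str) -> str:
    """Send one SMTP command and return the server's reply."""
    sock.sendall((line + "\r\n").encode("ascii"))
    return sock.recv(1024).decode("ascii", errors="replace")

# Hypothetical plain-text SMTP server; port 25 is the classic
# unencrypted channel (RFC 5321).
with socket.create_connection(("mail.example.com", 25)) as sock:
    print(sock.recv(1024).decode())                  # "220" greeting
    print(send_line(sock, "HELO client.example.org"))
    print(send_line(sock, "MAIL FROM:<alice@example.org>"))
    print(send_line(sock, "RCPT TO:<bob@example.com>"))
    print(send_line(sock, "DATA"))                   # "354": go ahead
    # The message itself is also sent in the clear, terminated by a
    # lone dot; any relay on the path can read or alter it.
    sock.sendall(b"Subject: hello\r\n\r\nReadable by every relay.\r\n.\r\n")
    print(sock.recv(1024).decode())
    print(send_line(sock, "QUIT"))
```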

It also is telling that supporters of interoperability tend to point to small-scale platforms (e.g., Mastodon) or to protocols with unacceptably poor usability for most of today’s Internet users (e.g., Usenet). When proposing solutions to potential privacy problems—e.g., that users will adequately monitor how various platforms use their data—they often assume unrealistic levels of user interest or technical acumen.

Interoperability in the DMA

The current draft of the DMA contains several provisions bearing on interoperability, all of which apply only to “gatekeepers”—i.e., the largest online platforms:

  1. Mandated interoperability of “ancillary services” (Art 6(1)(f)); 
  2. Real-time data portability (Art 6(1)(h)); and
  3. Business-user access to their own and end-user data (Art 6(1)(i)). 

The first provision, Art 6(1)(f), is meant to force gatekeepers to allow third-party services such as payment or identification providers—for example, letting people create social media accounts without providing an email address, which is possible today using services like “Sign in with Apple.” This kind of interoperability doesn’t pose as big a privacy risk as mandated interoperability of “core” services (e.g., messaging on a platform like WhatsApp or Signal), partially due to the more limited scope of data that needs to be exchanged.
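As a rough illustration of how such identification services typically work, here is a minimal Python sketch of the first step of an OAuth 2.0/OpenID Connect sign-in flow. The endpoint, client ID, and redirect URI are hypothetical, and real flows add further steps (token exchange, signature verification):

```python
import secrets
from urllib.parse import urlencode

# Hypothetical identity provider; "Sign in with Apple" and similar
# services follow the same general pattern.
AUTHORIZE_ENDPOINT = "https://id.example.com/oauth2/authorize"

def build_signin_url(client_id: str, redirect_uri: str) -> str:
    """Build the URL the user is sent to in order to authenticate."""
    params = {
        "response_type": "code",             # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid email",             # request identity, not a mailbox
        "state": secrets.token_urlsafe(16),  # anti-CSRF value
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

print(build_signin_url("social-app", "https://social.example/callback"))
```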

However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. It is therefore important that gatekeepers not be prevented from adequately protecting their users, though there are likely trade-offs between those protections and the interoperability that some want. Proponents of stronger interoperability want this provision amended to cover all “core” services, not just “ancillary” ones, which would constitute precisely the kind of radical interoperability that cannot be safely mandated today.

The other two provisions do not mandate full two-way interoperability, where a third party could both read data from a service like Facebook and modify content on that service. Instead, they provide for one-way “continuous and real-time” access to data—read-only.

The second provision (Art 6(1)(h)) mandates that gatekeepers give users effective “continuous and real-time” access to data “generated through” their activity. It’s not entirely clear whether this provision would be satisfied by, e.g., Facebook’s Graph API, but it likely would not be satisfied simply by being able to download one’s Facebook data, as that is not “continuous and real-time.”
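For illustration only, “continuous and real-time” read access might in practice look something like polling a data-export endpoint. The API below is entirely hypothetical, since the DMA text prescribes no particular interface:

```python
import time
import requests  # third-party HTTP library (pip install requests)

# Hypothetical endpoint and token; the provision covers data
# "generated through" the user's own activity.
API = "https://platform.example/v1/my-activity"
TOKEN = "token-granted-by-the-user"

last_seen = None
while True:
    params = {"since": last_seen} if last_seen else {}
    resp = requests.get(API, params=params, timeout=10,
                        headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    for event in resp.json().get("events", []):
        last_seen = event["id"]
        print(event)      # read-only: nothing is ever written back
    time.sleep(5)         # frequent polling approximates "real-time"
```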

Importantly, the proposed provision explicitly references the General Data Protection Regulation (GDPR), which suggests that—at least as regards personal data—the scope of this portability mandate is not meant to be broader than that of Article 20 GDPR. Given the GDPR reference and the qualification that the mandate applies only to data “generated through” the user’s activity, it would not cover data generated by other users—which is welcome, but likely will not satisfy the proponents of stronger interoperability.

The third provision from Art 6(1)(i) mandates only “continuous and real-time” data access and only as regards data “provided for or generated in the context of the use of the relevant core platform services” by business users and by “the end users engaging with the products or services provided by those business users.” This provision is also explicitly qualified with respect to personal data, which are to be shared after GDPR-like user consent and “only where directly connected with the use effectuated by the end user in respect of” the business user’s service. The provision should thus not be a tool for a new Cambridge Analytica to siphon data on users who interact with some Facebook page or app and their unwitting contacts. However, for the same reasons, it will also not be sufficient for the kinds of uses that proponents of stronger interoperability envisage.

Why can’t stronger interoperability be safely mandated today?

Let’s imagine that Art 6(1)(f) is amended to cover all “core” services, so that gatekeepers like Facebook end up with a legal duty to allow third parties to read data from, and write data to, Facebook via APIs. This would go beyond what is currently possible using Facebook’s Graph API. And because the mandate would create a legal duty to deal, Facebook would lose its current safety valve of simply cutting off abusive access. As Cory Doctorow and Bennett Cyphers note, there are at least three categories of privacy and security risks in this situation:

1. Data sharing and mining via new APIs;

2. New opportunities for phishing and sock puppetry in a federated ecosystem; and

3. More friction for platforms trying to maintain a secure system.

Unlike some other proponents of strong interoperability, Doctorow and Cyphers are open about the scale of the risk: “[w]ithout new legal safeguards to protect the privacy of user data, this kind of interoperable ecosystem could make Cambridge Analytica-style attacks more common.”
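A stylized sketch helps show why. Under a hypothetical mandated two-way API (every endpoint and field below is invented for illustration; no real platform API is implied), credentials granted by a single consenting user would expose content authored by contacts who never consented, and would also allow writing back into the network:

```python
import requests  # third-party HTTP library (pip install requests)

API = "https://gatekeeper.example/interop/v1"
TOKEN = "token-granted-by-ONE-consenting-user"

session = requests.Session()
session.headers["Authorization"] = f"Bearer {TOKEN}"

# Read side: the consenting user's feed necessarily contains posts,
# comments, and likes authored by contacts who never opted in.
feed = session.get(f"{API}/feed", timeout=10).json()
harvested = [(item["author_id"], item["text"]) for item in feed["items"]]

# Write side: the same credentials can inject content into the
# network, enabling the phishing and sock puppetry noted above.
session.post(f"{API}/posts", timeout=10,
             json={"text": "content written by a third party"})
```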

There are bound to be attempts to misuse interoperability through clearly criminal activity. But there also are likely to be more legally ambiguous attempts that are harder to proscribe ex ante. Proposals for strong interoperability mandates need to address this kind of problem.

So, what could be done to make strong interoperability reasonably safe? Doctorow and Cyphers argue that there is a “need for better privacy law,” but don’t say whether they think the GDPR’s rules fit the bill. This may be a matter of reasonable disagreement.

What isn’t up for serious debate is that the current framework and practice of privacy enforcement offers little confidence that misuses of strong interoperability would be detected and prosecuted, much less that they would be prevented (see here and here on GDPR enforcement). This is especially true for smaller and “judgment-proof” rule-breakers, including those from outside the European Union. Addressing the problems of privacy law enforcement is a herculean task, in and of itself.

The day may come when radical interoperability will, thanks to advances in technology and/or privacy enforcement, become acceptably safe. But it would be utterly irresponsible to mandate radical interoperability in the DMA and/or DSA and simply hope the obvious privacy and security problems will somehow be solved before the law enters into force. Instituting such a mandate would likely discredit the very idea of interoperability.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, will enable smoother operation of the liability regimes that currently apply to online intermediaries. 

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase their cost of doing business, leading to smaller networks and higher barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, such as by quickly de-platforming and refusing access to anyone who plausibly engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to the member states where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as the material that recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.
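As a rough illustration of what such due diligence might look like in practice, here is a minimal Python sketch. The field names are invented; the draft regulation enumerates the information actually required:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraderRecord:
    """Illustrative KYBC record a platform might keep for each trader."""
    name: str
    address: str
    email: str
    phone: str
    payment_account: str                  # e.g., an IBAN
    trade_register_id: Optional[str] = None
    self_certified_lawful: bool = False   # attests to lawful offerings

def may_keep_trading(record: TraderRecord) -> bool:
    """Sever ties, rather than merely remove content, when details
    are missing or the trader refuses to self-certify."""
    required = (record.name, record.address, record.email,
                record.phone, record.payment_account)
    return all(required) and record.self_certified_lawful
```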

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, in other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, they would make the Internet a safer place for European citizens.