
For many observers, the collapse of the crypto exchange FTX understandably raises questions about the future of the crypto economy, or even of public blockchains as a technology. The topic is high on the agenda of the U.S. Congress this week, with the House Financial Services Committee set for a Dec. 13 hearing with FTX CEO John J. Ray III and founder and former CEO Sam Bankman-Fried, followed by a Dec. 14 hearing of the Senate Banking Committee on “Crypto Crash: Why the FTX Bubble Burst and the Harm to Consumers.”

To some extent, the significance of the FTX case is likely to be exaggerated due to the outsized media attention that Bankman-Fried was able to generate. Nevertheless, many retail and institutional cryptocurrency holders were harmed by FTX and thus both users and policymakers will likely respond to what happened. In this post, I will contrast three perspectives on what may and should happen next for crypto.

‘Centralization Caused the FTX Fiasco’

The first perspective—likely the prevailing view in the crypto community—is that the FTX collapse was a failure of a centralized service, which should be emphatically distinguished from “true” or “crypto-native” decentralized services. The distinction between centralized and decentralized services is sharper in theory than in practice, and decentralization is better seen as a spectrum than as a simple binary. There is, however, little doubt that crypto-asset exchanges like FTX, which predominantly operate “off-chain” (i.e., on their own servers, not on a public blockchain network), are the paradigmatic case of centralization in the crypto space. They are thus not “decentralized finance” (DeFi), even though much of DeFi today does rely on centralized services—e.g., for price discovery.

As Vivek Ramaswamy and Mark Lurie argued in their Wall Street Journal op-ed, the key feature of a centralized exchange (a “CEX”) “is that somebody (…) takes custody of user funds.” Even when custody is subject to government regulation—as in traditional stock exchanges—custody creates a risk that funds will be misappropriated or otherwise lost by the custodian, as reportedly happened at FTX.

By contrast, no single actor takes custody of customer funds on a decentralized exchange (DEX); these function as smart contracts, self-executing code run on a blockchain like Ethereum. DEX users do, however, face other risks, such as hacks, market manipulation, bugs in code, and situations that combine features of all three. Some of these risks are also present in traditional stock exchanges, but as crypto insiders recognize (see below), the scale and unpredictability of risks like bugs in smart contracts are potentially significant. Still, as Ramaswamy and Lurie observe, the largest DeFi protocols like “MakerDAO, Compound and Clipper hold more than $15 billion, and their user funds have never been hacked.”
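To make the no-custody point concrete, here is a minimal Python sketch of the constant-product pricing rule popularized by DEXs such as Uniswap. It is illustrative only: the class, names, and numbers are invented, and a real DEX implements this logic as an on-chain smart contract whose code and state anyone can inspect, so no intermediary ever holds user balances.

```python
# Minimal sketch of a constant-product automated market maker (AMM), the pricing
# rule popularized by DEXs such as Uniswap. Illustrative only: a real DEX runs
# this logic as an on-chain smart contract, so no intermediary holds user funds.

class ConstantProductPool:
    def __init__(self, reserve_x: float, reserve_y: float, fee: float = 0.003):
        self.reserve_x = reserve_x  # e.g., ETH held by the pool contract
        self.reserve_y = reserve_y  # e.g., a stablecoin held by the pool contract
        self.fee = fee              # 0.3% fee, retained by the pool for liquidity providers

    def swap_x_for_y(self, amount_in: float) -> float:
        """Swap amount_in of X for Y, holding reserve_x * reserve_y (roughly) constant."""
        amount_in_after_fee = amount_in * (1 - self.fee)
        k = self.reserve_x * self.reserve_y       # invariant before the trade
        new_reserve_x = self.reserve_x + amount_in_after_fee
        new_reserve_y = k / new_reserve_x         # solve x * y = k for the new y
        amount_out = self.reserve_y - new_reserve_y
        self.reserve_x += amount_in               # the full input (incl. fee) stays in the pool
        self.reserve_y = new_reserve_y
        return amount_out

pool = ConstantProductPool(reserve_x=1_000, reserve_y=1_000_000)
print(pool.swap_x_for_y(10))  # ~9,871 Y for 10 X; larger trades get worse prices
```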

Aside from the lack of custody, DeFi also offers public transparency in two key respects: transparency of the self-executing code powering the DEX and transparency of completed transactions. In contrast, part of what enabled the FTX debacle is that external observers were not able to monitor the financial situation of the centralized exchange. The solution commonly put forward for CEX services on the blockchain—proof of reserves—may not match the transparency that DEX services can offer. Even if a proof-of-reserves requirement provided a reliable, real-time view of an exchange’s assets, it is unlikely to be able to do so for its liabilities. Because it is a business, a CEX may at any time incur liabilities that are not visible—or not easily visible—on the blockchain, such as liability to pay damages.
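One common proof-of-reserves construction commits to customer balances with a Merkle tree, so each user can verify that their own balance is included in the published total. The sketch below (hypothetical users and balances) shows the idea in a few lines, and also why it is one-sided: off-chain liabilities, such as damages owed, have no leaf in the tree.

```python
# Sketch of a Merkle-tree commitment to customer balances, the building block of
# "proof of reserves." Hypothetical users and balances; note that off-chain
# liabilities (e.g., damages owed) never appear in this tree.

import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(user_id: str, balance: int) -> bytes:
    return sha(f"{user_id}:{balance}".encode())

def merkle_root(leaves: list[bytes]) -> bytes:
    level = leaves
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node when a level is odd-sized
            level = level + [level[-1]]
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

balances = [("alice", 50), ("bob", 120), ("carol", 75), ("dan", 10)]
root = merkle_root([leaf(u, b) for u, b in balances])
print(root.hex())  # the exchange publishes this; each user checks their inclusion
```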

Some have proposed that a CEX could establish trust by offering to each user legally binding “proof of insurance” from a reputable insurer. But this simply moves the locus of trust to the insurer, which may or may not be acceptable to users, depending on the circumstances.

‘The Ecosystem Needs Time to Mature Before We Get Even More Attention’

As a critique of today’s centralized crypto services, the first perspective is persuasive. The implication that decentralized solutions offer a fully ready alternative has been called into question, however, both within the crypto space and from the outside. One internal voice of caution has been Ethereum founder Vitalik Buterin, one of crypto’s key thought leaders. Writing shortly before the FTX collapse, Buterin said:

… I don’t think we should be enthusiastically pursuing large institutional capital at full speed. I’m actually kinda happy a lot of the ETFs are getting delayed. The ecosystem needs time to mature before we get even more attention.

He added:

… regulation that leaves the crypto space free to act internally but makes it harder for crypto projects to reach the mainstream is much less bad than regulation that intrudes on how crypto works internally.

Following the FTX collapse, Buterin elaborated on the risks he sees for decentralized crypto services, singling out vulnerabilities in smart-contract code as a major concern.

Buterin’s vision is one of a de facto regulatory sandbox, allowing experimentation and technological development, but combined with restrictions on the expanding integration of crypto with the broader economy.

Centralization Will Stay, but with Heavier Regulation

It is even more understandable that observers who come from traditional finance have reservations about the potential of decentralized services to replace the centralized ones, at least in the near term. One example is JPMorgan’s recent research report. The report predicts that institutional crypto custodians, not DeFi, will benefit the most from FTX’s collapse. According to JPMorgan, this will happen due to, among other factors:

  • Regulatory pressure to unbundle various roles in crypto-finance, such as brokerage-trading, lending, clearing, and custody. The argument is that, by combining trading, clearing, and settlement, DeFi solutions operate more efficiently than centralized services, but that this very bundling runs counter to the unbundling push and will thus “face greater scrutiny.”
  • DeFi services being unattractive to large institutional investors because of lower transaction speeds and the public nature of blockchain transactions, which can expose a trader’s history and strategies.

The report listed several other concerns, including smart-contract risks (which Buterin also singled out) and front-running of trades (part of the wider phenomenon of “maximal extractable value,” or MEV, extraction), which may lead to worse execution prices for a trader.
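To see how front-running worsens execution, consider a stylized “sandwich” on a constant-product pool. The numbers below are invented for illustration and fees are ignored; the point is only that an attacker who trades ahead of a pending order shifts the price against it.

```python
# Stylized illustration of front-running ("sandwiching") on a constant-product
# pool (x * y = k). All numbers are hypothetical; fees are ignored.

def amount_out(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    k = reserve_in * reserve_out
    return reserve_out - k / (reserve_in + amount_in)

x, y = 1_000.0, 1_000_000.0        # pool reserves of tokens X and Y

# Without front-running: the victim swaps 10 X.
print(amount_out(x, y, 10))        # ~9,901 Y

# With front-running: the attacker swaps 50 X first, moving the price...
attacker_out = amount_out(x, y, 50)
x2, y2 = x + 50, y - attacker_out

# ...so the victim's identical 10 X order now executes at a worse price.
print(amount_out(x2, y2, 10))      # ~8,985 Y (the attacker then sells back at a profit)
```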

Those concerns do refer to real issues in DeFi, although, as the report notes, solutions to address them are under active development. But when comparing the current state of DeFi to custodial finance, it is also important to assess the relative benefits of the latter realistically. For example, the risk of market manipulation in DeFi needs to be weighed against the opacity of custodial services, which creates opportunities for rent extraction at customers’ expense.

JPMorgan stressed that the likely reaction to the FTX collapse will be increased pressure for heavier regulation of custody of customer funds, transparency requirements and, as noted earlier, unbundling of various roles in crypto-finance. The report’s prediction that, in doing so, policymakers will not be inclined to distinguish between centralized and decentralized services may be accurate, but that would be an unfortunate and unwarranted outcome.

The risks that centralized services pose—due to their lack of transparency and their taking custody of customer funds—do not translate straightforwardly to decentralized services. Regarding unbundling, it should be noted that a key reason for this regulatory solution is to prevent conflicts of interest. But a DEX that operates autonomously according to publicly shared logic (open-source code) does not pose the same conflict-of-interest risks that a CEX does. Decentralized services do face risks, and there may be good reasons to seek policy responses to those risks. But the unique features of decentralized services should be appropriately accommodated. Admittedly, this is a challenging task, owing partly to the difficulty of defining decentralization in law.

Conclusion

The collapse of FTX was a failure of a centralized model of crypto-asset services. This does not mean that centralized services have no future, but more work will need to be done to build stakeholder trust. Moreover, the FTX affair has clearly increased the pressure for additional regulation of centralized services, although it is unclear which specific regulatory responses it will prompt.

Just before the FTX collapse, the EU had nearly finalized its Markets in Crypto-Assets (“MiCA”) Regulation, which is intended to regulate centralized “crypto-asset service providers.” There is an argument to be made that MiCA might have stopped a situation like the one at FTX, but—given MiCA’s vague, general language—whether it would do so in future cases depends chiefly on how regulators implement prudential oversight.

Given the well-known cases of sophisticated regulators failing to prevent harm—e.g., in MF Global and Wirecard—the mere existence of prudential oversight may be insufficient to ground trust in centralized services. Thus, JPMorgan’s thesis that centralized services will benefit from the FTX affair lacks sufficient justification. Perhaps, even without the involvement of regulators, centralized providers will develop mechanisms for reliable transparency—such as “proof of reserves”—although there is a significant risk here of mere “transparency theatre.”

As to decentralized crypto services, the FTX collapse may be a chance for broader adoption, but Buterin’s words of caution should not be dismissed. JPMorgan may also be right to suggest that policymakers will not be inclined to distinguish between centralized and decentralized services and that the pressure for increased regulation will spill over to DeFi. As I noted earlier, however, policymakers would do well to be attentive to the relevant differences. For example, centralized services pose risks due to their lack of transparency and their control of customer funds—two significant risks that do not necessarily apply to decentralized services. Hence, unbundling of the kind that could be beneficial for centralized services may bring little of value to a DEX, while risking the loss of some of the core benefits of decentralized solutions.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here) and reaffirm its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e., arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.

To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur. This made its nectar incredibly hard to reach for insects. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid’s nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to them being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that by applying such a classification, we would obtain a graph that looks something like this:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms control who is allowed on their platforms and how participants can interact with users. Apple vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).

In the top-right quadrant, the business models of Amazon and Qualcomm are much more “open,” yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can pay Amazon for product placement, fulfilment services, etc.

Finally, Google Search and Android sit in the bottom-left quadrant. Both of these services are weakly propertized. The Android source code is shared freely via an open-source license, and Google’s apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open.” While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc.). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement.

Enforcement

Readers might ask: what is the point of this classification? The answer is that, in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and to open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile-payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the Google cases, in the EU, sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions/investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are made to share them (or, at the very least, to monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems – both in the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. This theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards, but, at least in the mobile internet industry, few if any of these have taken off (the dominant 5G and WiFi standards both rest on royalty-bearing patents). The pattern repeats in other highly standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms so far failed to achieve truly meaningful success at the consumer end of the market?

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically -and perhaps anticompetitively- thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into details over the merits of the first conjecture. Current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked – especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, a model that tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms’ ability to propertize their assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model that the Commission reprimanded. Apple tied the Safari browser to its iPhones, Google went to some lengths to ensure that Chrome was preloaded on devices, and Samsung phones come with Samsung Internet as the default. None of this has deterred consumers. A sizable share of them opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s macOS).

Finally, it is worth noting that the remedies imposed by competition authorities have been anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the internet browser ballot box imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision.

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire. 

To conclude, consumers and firms appear to gravitate towards platforms that are both closed and highly propertized – the opposite of what the Commission and many other competition authorities favor. The reasons for this trend remain poorly understood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms,” but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things they fail to understand. The digital economy might just be the latest chapter in this unfortunate trend.

Underpinning many policy disputes is a frequently rehearsed conflict of visions: should we experiment with policies that are likely to lead to superior, but unknown, solutions, or should we stick to well-worn policies, regardless of how poorly they fit current circumstances?

This conflict is clearly visible in the debate over whether DOJ should continue to enforce its consent decrees with the major music performing rights organizations (“PROs”), ASCAP and BMI—or terminate them. 

As we note in our recently filed comments with the DOJ, summarized below, the world has moved on since the decrees were put in place in 1941. Given the changed circumstances, the DOJ should terminate the consent decrees. This would allow entrepreneurs, armed with modern technology, to facilitate a true market for public-performance rights.

The consent decrees

In the early days of radio, it was unclear how composers and publishers could effectively monitor and enforce their copyrights. Thousands of radio stations across the nation were playing the songs that tens of thousands of composers had written. Given the state of technology, there was no readily foreseeable way to enable bargaining between the stations and composers for license fees associated with these plays.

In 1914, a group of rights holders established the American Society of Composers, Authors and Publishers (ASCAP) as a way to overcome these transaction costs by negotiating with radio stations on behalf of all of its members.

Even though ASCAP’s business was clearly aimed at ensuring that rightsholders were appropriately compensated for the use of their works, which logically would have incentivized greater output of licensable works, the nonstandard arrangement it embodied was unacceptable to the antitrust enforcers of the era. Not long after it was created, the Department of Justice began investigating ASCAP for potential antitrust violations.

While the agglomeration of rights under a single entity had obvious benefits for licensors and licensees of musical works, a power struggle nevertheless emerged between ASCAP and radio broadcasters over the terms of those licenses. Eventually this struggle led to the formation of a new PRO, the broadcaster-backed BMI, in 1939. The following year, the DOJ challenged the activities of both PROs in dual criminal antitrust proceedings. The eventual result was a set of consent decrees in 1941 that, with relatively minor modifications over the years, still regulate the music industry.

Enter the Internet

The emergence of new ways to distribute music has, perhaps unsurprisingly, resulted in renewed interest from artists in developing alternative ways to license their material. In 2014, BMI and ASCAP asked the DOJ to modify their consent decrees to permit music publishers partially to withdraw from the PROs, which would have enabled those partially withdrawing publishers to license their works to digital services under separate agreements (and would have prohibited the PROs from licensing their works to those same services). However, the DOJ rejected this request and insisted that the consent decrees require “full-work” licenses — a result that would have not only entrenched the status quo, but also erased the competitive differences that currently exist between the PROs. (It might also have created other problems, such as limiting collaborations between artists who currently license through different PROs.)

This episode demonstrates a critical flaw in how the consent decrees currently operate. Imposing full-work license obligations on PROs would have short-circuited the limited market that currently exists, to the detriment of creators, competition among PROs and, ultimately, consumers. Paradoxically, these harms flow directly from a presumption that administrative officials seeking to enforce antitrust law — the ultimate aim of which is to promote competition and consumer welfare — can dictate market terms through top-down regulatory intervention better than participants working together.

If a PRO wants to offer full-work licenses to its licensee-customers, it should be free to do so (including, e.g., by contracting with other PROs in cases where the PRO in question does not own the work outright). These could be a great boon to licensees and the market. But such an innovation would flow from a feedback mechanism in the market, and would be subject to that same feedback mechanism. 

However, for the DOJ, as a regulatory overseer, to intervene in the market and assert a preference that it deemed superior (but that was clearly not the result of market demand, or subject to market discipline) is fraught with difficulty. And this is the emblematic problem with the consent decrees and the mandated licensing regimes they create: they allow regulators to imagine that they have both the knowledge and the expertise to manage highly complicated markets. But, as Mark Lemley has observed, “[g]one are the days when there was any serious debate about the superiority of a market-based economy over any of its traditional alternatives, from feudalism to communism.”

It is no knock against the DOJ that it patently does not have either the knowledge or expertise to manage these markets: no one does. That’s the entire point of having markets, which facilitate the transmission and effective utilization of vast amounts of disaggregated information, including subjective preferences, that cannot be known to anyone other than the individual who holds them. When regulators can allow this process to work, they should.

Letting the market move forward

Some advocates of the status quo have recommended that the consent orders remain in place, because 

Without robust competition in the music licensing market, consumers could face higher prices, less choice, and an increase in licensing costs that could render many vibrant public spaces silent. In the absence of a truly competitive market in which PROs compete to attract services and other licensees, the consent decrees must remain in place to prevent ASCAP and BMI from abusing their substantial market power.

This gets to the very heart of the problem with the conflict of visions that undergirds policy debates. Advocating for the status quo in this manner is based on a static view of “markets,” one that is, moreover, rooted in an early twentieth-century conception of the relevant industries. The DOJ froze the licensing market in time with the consent decrees — perhaps justifiably in 1941 given the state of technology and the very high transaction costs involved. But technology and business practices have evolved and are now much more capable of handling the complex, distributed set of transactions necessary to make the performance license market a reality.

Believing that the absence of the consent decrees will force the performance-licensing market to collapse into an anticompetitive wasteland reflects a failure of imagination and suggests a fundamental distrust in the power of the market to uncover novel solutions — against the overwhelming evidence to the contrary.

Yet those of a dull and pessimistic mindset need not unduly fear the revocation of the consent decrees. For if evidence emerges that market participants (including the PROs and whatever other entities emerge) are engaging in anticompetitive practices to the detriment of consumer welfare, the DOJ can sue those entities. The threat of such actions should be sufficient in itself to deter such anticompetitive practices, but if it is not, then the sword of antitrust, including potentially the imposition of consent decrees, can once again be wielded.

Meanwhile, those of us with an optimistic, imaginative mindset look forward to a time in the near future when entrepreneurs devise innovative and cost-effective solutions to the problem of highly distributed music licensing. In some respects, their job is made easier by the fact that an increasing proportion of music is streamed via a small number of large companies (Spotify, Pandora, Apple, Amazon, Tencent, YouTube, Tidal, etc.). But it is quite feasible that, in the absence of the consent decrees, new licensing systems will emerge, built on modern database technologies, blockchain, and other distributed ledgers, that enable much more effective usage-based licenses, applicable not only to these streaming services but to others as well.
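As a toy illustration of what usage-based licensing reduces to, consider the arithmetic: each rightsholder’s payout is simply rate × plays × ownership share. Everything below is hypothetical (the rate, works, splits, and names are invented); the hard part in practice is maintaining authoritative ownership data and an auditable log of plays, which is where modern databases and distributed ledgers would come in.

```python
# Toy sketch of usage-based royalty accounting. All rates, works, splits, and
# names are hypothetical; the arithmetic is the easy part of the problem.

from collections import defaultdict

RATE_PER_PLAY = 0.0008  # hypothetical per-play public-performance royalty, in dollars

ownership = {  # work -> {rightsholder: ownership share}
    "song_a": {"composer_1": 0.50, "publisher_1": 0.50},
    "song_b": {"composer_2": 0.75, "publisher_2": 0.25},
}

play_log = ["song_a", "song_b", "song_a", "song_a"]  # e.g., events reported by a streaming service

payouts: defaultdict[str, float] = defaultdict(float)
for work in play_log:
    for holder, share in ownership[work].items():
        payouts[holder] += RATE_PER_PLAY * share

print(dict(payouts))
# {'composer_1': 0.0012, 'publisher_1': 0.0012, 'composer_2': 0.0006, 'publisher_2': 0.0002}
```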

We hope the DOJ has the foresight to allow such true competition to enter this market and the strength to believe enough in our institutions that it can permit some uncertainty while entrepreneurs experiment with superior methods of facilitating music licensing.


GDPR is officially one year old. How have the first 12 months gone? As you can see from the mix of data and anecdotes below, it appears that compliance costs have been astronomical; individual “data rights” have led to unintended consequences; “privacy protection” seems to have undermined market competition; and there have been large unseen — but not unmeasurable! — costs in forgone startup investment. So, all in all, about what we expected.

GDPR cases and fines

Here is the latest data on cases and fines released by the European Data Protection Board:

  • €55,955,871 in fines
    • €50 million of which was a single fine on Google
  • 281,088 total cases
    • 144,376 complaints
    • 89,271 data breach notifications
    • 47,441 other
  • 37.0% ongoing
  • 62.9% closed
  • 0.1% appealed

Unintended consequences of new data privacy rights

GDPR can be thought of as a privacy “bill of rights.” Many of these new rights have come with unintended consequences. If your account gets hacked, the hacker can use the right of access to get all of your data. The right to be forgotten is in conflict with the public’s right to know a bad actor’s history (and many bad actors are using the right to memory-hole their misdeeds). The right to data portability creates another attack vector for hackers to exploit. And the right to opt out of data collection creates a free-rider problem, where users who opt in subsidize the privacy of those who opt out.

Article 15: Right of access

  • “Amazon sent 1,700 Alexa voice recordings to the wrong user following data request” [The Verge / Nick Statt]
  • “Today I discovered an unfortunate consequence of GDPR: once someone hacks into your account, they can request—and potentially access—all of your data. Whoever hacked into my Spotify account got all of my streaming, song, etc. history simply by requesting it.” [Jean Yang]

Article 17: Right to be forgotten

  • “Since 2016, newspapers in Belgium and Italy have removed articles from their archives under [GDPR]. Google was also ordered last year to stop listing some search results, including information from 2014 about a Dutch doctor who The Guardian reported was suspended for poor care of a patient.” [NYT / Adam Satariano]
  • “French scam artist Michael Francois Bujaldon is using the GDPR to attempt to remove traces of his United States District Court case from the internet. He has already succeeded in compelling PacerMonitor to remove his case.” [PlainSite]
  • “In the last 5 days, we’ve had requests under GDPR to delete three separate articles … all about US lawsuits concerning scams committed by Europeans. That ‘right to be forgotten’ is working out just great, huh guys?” [Mike Masnick]

Article 20: Right to data portability

  • Data portability increases the attack surface for bad actors to exploit. In a sense, the Cambridge Analytica scandal was a case of too much data portability.
  • “The problem with data portability is that it goes both ways: if you can take your data out of Facebook to other applications, you can do the same thing in the other direction. The question, then, is which entity is likely to have the greater center of gravity with regards to data: Facebook, with its social network, or practically anything else?” [Stratechery / Ben Thompson]
  • “Presumably data portability would be imposed on Facebook’s competitors and potential competitors as well.  That would mean all future competing firms would have to slot their products into a Facebook-compatible template.  Let’s say that 17 years from now someone has a virtual reality social network innovation: does it have to be “exportable” into Facebook and other competitors?  It’s hard to think of any better way to stifle innovation.” [Marginal Revolution / Tyler Cowen]

Article 21: Right to opt out of data processing

  • “[B]y restricting companies from limiting services or increasing prices for consumers who opt-out of sharing personal data, these frameworks enable free riders—individuals that opt out but still expect the same services and price—and undercut access to free content and services.” [ITIF / Alan McQuinn and Daniel Castro]

Compliance costs are astronomical

  • Prior to GDPR going into effect, “PwC surveyed 200 companies with more than 500 employees and found that 68% planned on spending between $1 and $10 million to meet the regulation’s requirements. Another 9% planned to spend more than $10 million. With over 19,000 U.S. firms of this size, total GDPR compliance costs for this group could reach $150 billion.” [Fortune / Daniel Castro and Michael McLaughlin] (See the back-of-envelope sketch after this list.)
  • “[T]he International Association of Privacy Professionals (IAPP) estimates 500,000 European organizations have registered data protection officers (DPOs) within the first year of the General Data Protection Regulation (GDPR). According to a recent IAPP salary survey, the average DPO’s salary in Europe is $88,000.” [IAPP]
  • As of March 20, 2019, 1,129 US news sites are still unavailable in the EU due to GDPR. [Joseph O’Connor]
  • Microsoft had 1,600 engineers working on GDPR compliance. [Microsoft]
  • During a Senate hearing, Keith Enright, Google’s chief privacy officer, estimated that the company spent “hundreds of years of human time” to comply with the new privacy rules. [Quartz / Ashley Rodriguez]
    • However, French authorities ultimately decided Google’s compliance efforts were insufficient: “France fines Google nearly $57 million for first major violation of new European privacy regime” [Washington Post / Tony Romm]
  • “About 220,000 name tags will be removed in Vienna by the end of [2018], the city’s housing authority said. Officials fear that they could otherwise be fined up to $23 million, or about $1,150 per name.” [Washington Post / Rick Noack]
    UPDATE: Wolfie Christl pointed out on Twitter that the order to remove name tags was rescinded after only 11,000 name tags were removed due to public backlash and what Housing Councilor Kathrin Gaal said were “different legal opinions on the subject.”
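The $150 billion headline above is easy to sanity-check. Below is a back-of-envelope reconstruction under my own assumptions (the survey shares apply to all ~19,000 firms, and “more than $10 million” is treated as a flat $10 million); the headline figure sits near the top of the implied range.

```python
# Back-of-envelope check of the "$150 billion" compliance estimate. Assumptions
# are mine, not the survey's: shares apply to all ~19,000 large U.S. firms, and
# "more than $10 million" is capped at $10 million.

N_FIRMS = 19_000

segments = [
    # (share of firms, low spend per firm, high spend per firm)
    (0.68, 1e6, 10e6),   # "between $1 and $10 million"
    (0.09, 10e6, 10e6),  # "more than $10 million"
]

low = sum(share * N_FIRMS * lo for share, lo, _ in segments)
high = sum(share * N_FIRMS * hi for share, _, hi in segments)
print(f"${low / 1e9:.0f}B to ${high / 1e9:.0f}B")  # $30B to $146B
```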

Tradeoff between privacy regulations and market competition

“On the big guys increasing market share? I don’t believe [the law] will have such a consequence.” Věra Jourová, the European Commissioner for Justice, Consumers and Gender Equality [WSJ / Sam Schechner and Nick Kostov]

“Mentioned GDPR to the head of a European media company. ‘Gift to Google and Facebook, enormous regulatory own-goal.'” [Benedict Evans]

  • “Hundreds of companies compete to place ads on webpages or collect data on their users, led by Google, Facebook and their subsidiaries. The European Union’s General Data Protection Regulation, which took effect in May, imposes stiff requirements on such firms and the websites who use them. After the rule took effect in May, Google’s tracking software appeared on slightly more websites, Facebook’s on 7% fewer, while the smallest companies suffered a 32% drop, according to Ghostery, which develops privacy-enhancing web technology.” [WSJ / Greg Ip]
  • Havas SA, one of the world’s largest buyers of ads, says it observed a low double-digit percentage increase in advertisers’ spending through DBM on Google’s own ad exchange on the first day the law went into effect, according to Hossein Houssaini, Havas’s global head of programmatic solutions. On the selling side, companies that help publishers sell ad inventory have seen declines in bids coming through their platforms from Google. Paris-based Smart says it has seen a roughly 50% drop. [WSJ / Nick Kostov and Sam Schechner]
  • “The consequence was that just hours after the law’s enforcement, numerous independent ad exchanges and other vendors watched their ad demand volumes drop between 20 and 40 percent. But with agencies free to still buy demand on Google’s marketplace, demand on AdX spiked. The fact that Google’s compliance strategy has ended up hurting its competitors and redirecting higher demand back to its own marketplace, where it can guarantee it has user consent, has unsettled publishers and ad tech vendors.” [Digiday / Jessica Davies]

Unseen costs of forgone investment & research

  • Startups: One study estimated that venture capital invested in EU startups fell by as much as 50 percent due to GDPR implementation: “Specifically, our findings suggest a $3.38 million decrease in the aggregate dollars raised by EU ventures per state per crude industry category per week, a 17.6% reduction in the number of weekly venture deals, and a 39.6% decrease in the amount raised in an average deal following the rollout of GDPR … We use our results to provide a back-of-the-envelope calculation of a range of job losses that may be incurred by these ventures, which we estimate to be between 3,604 to 29,819 jobs.” [NBER / Jian Jia, Ginger Zhe Jin, and Liad Wagman]
  • Mergers and acquisitions: “55% of respondents said they had worked on deals that fell apart because of concerns about a target company’s data protection policies and compliance with GDPR” [WSJ / Nina Trentmann]
  • Scientific research: “[B]iomedical researchers fear that the EU’s new General Data Protection Regulation (GDPR) will make it harder to share information across borders or outside their original research context.” [Politico / Sarah Wheaton]

GDPR graveyard

Small and medium-sized businesses (SMBs) have left the EU market in droves (or shut down entirely). Here is a partial list:

Blockchain & P2P Services

  • CoinTouch, peer-to-peer cryptocurrency exchange
  • FamilyTreeDNA, free and public genetic tools
    • Mitosearch
    • Ysearch
  • Monal, XMPP chat app
  • Parity, know-your-customer service for initial coin offerings (ICOs)
  • Seznam, social network for students
  • StreetLend, tool sharing platform for neighbors

Marketing

  • Drawbridge, cross-device identity service
  • Klout, social reputation service by Lithium
  • Unroll.me, inbox management app
  • Verve, mobile programmatic advertising

Video Games

Other

Weekend reads

Eric Fruits — 1 June 2018

Good government dies in the darkness. This article is getting a lot of attention on Wonk Twitter and what’s left of the blogosphere. From the abstract:

We examine the effect of local newspaper closures on public finance for local governments. Following a newspaper closure, we find municipal borrowing costs increase by 5 to 11 basis points in the long run …. [T]hese results are not being driven by deteriorating local economic conditions. The loss of monitoring that results from newspaper closures is associated with increased government inefficiencies, including higher likelihoods of costly advance refundings and negotiated issues, and higher government wages, employees, and tax revenues.

What the hell happened at GE? This guy blames Jeff Immelt’s buy-high/sell-low strategy. I blame Jack Welch.

Academic writing is terrible. Science journalist Anna Clemens wants to change that. (Plus, she quotes one of my grad school professors, Paul Zak.) Here’s what Clemens says about turning your research into a story:

But – just as with any Hollywood success in the box office – your paper will not become a page-turner, if you don’t introduce an element of tension now. Your readers want to know what problem you are solving here. So, tell them what gap in the literature needs to be filled, why method X isn’t good enough to solve Y, or what still isn’t known about mechanism Z. To introduce the tension, words such as “however”, “despite”, “nevertheless”, “but”, “although” are your best friends. But don’t fool your readers with general statements, phrase the problem precisely.

Write for the busy reader. While you’re writing your next book, paper, or op-ed, check out what the readability robots think of your writing.
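One way to consult the readability robots yourself: the open-source textstat Python package (pip install textstat) implements several of the standard readability formulas. A quick sketch:

```python
# Quick readability check of a draft using the open-source textstat package.

import textstat

draft = (
    "Underpinning many policy disputes is a frequently rehearsed conflict of "
    "visions about experimentation versus well-worn policies."
)

print(textstat.flesch_reading_ease(draft))   # higher is easier; 60-70 is plain English
print(textstat.flesch_kincaid_grade(draft))  # approximate U.S. school grade level
```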

They tell me I’ll get more hits if I mention Bitcoin and blockchain. Um, OK. Here goes. The Seattle Times reports on the mind-blowing amount of power cryptocurrency miners are trying to buy in the electricity-rich Pacific Northwest:

In one case this winter, miners from China landed their private jet at the local airport, drove a rental car to the visitor center at the Rocky Reach Dam, just north of Wenatchee, and, according to Chelan County PUD officials, politely asked to see the “dam master because we want to buy some electricity.”

You will never find a more wretched hive of scum and villainy. The Wild West of regulating cryptocurrencies:

The government must show that the trader intended to artificially affect the price. The Federal District Court in Manhattan once explained that “entering into a legitimate transaction knowing that it will distort the market is not manipulation — only intent, not knowledge, can transform a legitimate transaction into manipulation.”

Tyler Cowen on what’s wrong with the Internet. Hint: It’s you.

And if you hate Twitter, it is your fault for following the wrong people (try hating yourself instead!).  Follow experts and people of substance, not people who seek to lower the status of others.

If that fails, “mute words” is your friend. Muting a few terms made my Twitter experience significantly more enjoyable and informative.
