A bevy of states are racing to mandate “digital choice” in social media. The new bills promise easy data portability and forced interoperability among platforms—letting users carry their accounts, contacts, and content across services through open protocols. Utah enacted the first such law in 2025, and legislatures in Virginia, South Dakota, New York, California, and New Hampshire are now considering similar measures in their 2026 sessions.
The pitch sounds simple: give users control over their information. A closer look tells a different story. The bills never identify a clear market failure. Their interoperability mandates expose nonconsenting users to significant privacy risks. Their artificial-intelligence (AI) provisions do not cohere into a workable regulatory scheme. And lawmakers are moving ahead despite little evidence that consumers actually want social-media interoperability.
Copy-Paste Federalism
As noted above, Utah set the template. Gov. Spencer Cox signed H.B. 418 in March 2025, with an effective date of July 1, 2026. The statute requires full social-media interoperability: users must be able to download and transfer their content and interactions, and platforms must maintain “transparent, third-party-accessible… interfaces using open protocols.”
Lawmakers almost immediately discovered the problems. Utah is already advancing a follow-up measure, H.B. 408, to patch obvious gaps, including adding consent protections for users whose data would be pulled into another user’s transfer. The need to amend the law within a year of enactment should itself raise alarms.
Other states have nonetheless followed in quick succession. Virginia’s SB 85 passed the Senate 40-0 earlier this month and is now moving through the House. It goes beyond Utah by applying interoperability mandates to AI “model operators,” requiring portability of “contextual data,” including prompts, chat histories, uploaded files, and model-generated inferences.
South Dakota’s SB 111 cleared both chambers, passing the Senate 34-0 and the House of Representatives 62-3. New York has companion bills—A8963 in the Assembly and S7476 in the Senate—pending in committee. California Assemblymember Josh Lowenthal plans AB 2169, which would amend the California Consumer Privacy Act to mandate portability of social-graph and AI contextual data. New Hampshire’s HB 1589 stands as the lone rejection, dying in committee on a 16-0 vote recommending against passage.
The speed is notable. The similarity is more so. Across states, the bills use nearly identical language, structure, and underlying theory.
Portability Without Purpose
The threshold question these bills never answer is simple: what consumer harm are they fixing?
Lawmakers assume that closed platform ecosystems harm consumer welfare. Users often experience them as a benefit. People keep separate identities across services because the services do different things. A pseudonymous discussion on Reddit serves a different purpose than professional networking on LinkedIn or sharing selfies on Instagram. These are not interchangeable experiences trapped behind technical barriers; they are differentiated products users intentionally choose. Forced real-time interoperability would collapse distinctions users actively maintain.
Supporters invoke competition policy and “walled gardens,” often analogizing to telephone-number portability. The comparison fails. Telephone numbers are standardized identifiers within a regulated utility network. Social-media platforms are heterogeneous products built around distinct norms and interactions. The claim that a user’s data should move seamlessly from Reddit to X (formerly Twitter) to Instagram to LinkedIn assumes a functional equivalence that does not exist. A Reddit comment thread organized around pseudonymous, upvote-driven discussion does not resemble an Instagram story or a LinkedIn endorsement. As Gus Hurwitz has observed, the analogy breaks down because social-media content and interactions lack any comparable standardization process.
Implementation makes the problem clearer. What would it mean for a Reddit post—embedded in a topic-based, pseudonymous community—to appear in an Instagram feed optimized for visual engagement or a LinkedIn timeline curated for professional signaling? The content would lose the context that gives it meaning. Interoperability assumes user-generated content has value independent of the platform in which it arose. In social media, that assumption is usually wrong.
Nor is there strong evidence of consumer demand. Users already have options. Existing privacy regimes, including the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act, allow data downloads. Users routinely maintain accounts across multiple services. Multi-homing is the norm, which undermines claims that network effects meaningfully lock users in. The real constraint is not the inability to export data; it is that the data has little value outside the environment that produced it. A follower list from one platform rarely translates into engagement on another.
The economic literature is, at best, ambivalent. Research on China’s mandatory interoperability regime found it provided “very limited convenience to consumers” and “hardly facilitat[ed] entry of small operators,” instead benefiting other large platforms seeking expansion. Geoffrey Manne and Sam Bowman likewise find mixed results in the United Kingdom’s Open Banking initiative—the closest real-world analogue—enabling some fintech entry while creating adverse distributional effects for privacy-conscious consumers.
Your Privacy Is Not Yours Alone
The bills’ most serious flaw is third-party privacy—or, more accurately, the lack of it. Each proposal defines a user’s “social graph” broadly to include connections, posts, comments, reactions, shares, and associated metadata. New York’s bill is typical. It expressly covers “secondary users’ responses to the covered user’s content,” along with the metadata attached to those interactions.
The problem is structural. When User A exports a social graph, the transfer necessarily includes User B’s data—comments, interactions, and relationship information—whether User B consents or not. This is not hypothetical. Utah’s follow-up measure, H.B. 408, tacitly concedes the issue by trying to add consent requirements. The fix does not work. Requiring consent from every affected user would make interoperability practically impossible, because every social interaction involves multiple parties.
The risks multiply once the data leaves the platform. Outside a platform’s controlled environment, the recipient is bound only by its own privacy policy. Virginia’s SB 85 would open a substantial loophole, effectively converting personal information into a portable resource detached from its original context. The recipients may be lightly regulated startups, foreign actors, or firms with minimal privacy commitments. Broad interoperability interfaces also create attractive attack surfaces, while enforcement authorities cannot realistically detect or stop misuse in real time—especially when the actors operate outside the United States.
Moderation Meets the Open Gate
Mandated interoperability would also disrupt content moderation. Platforms have spent billions building detection and removal systems. The Congressional Research Service reports that Meta removes about 90% of violent or graphic content automatically, while Reddit removes roughly 72% through automation. Those tools are platform-specific, trained on each service’s norms, signals, and user behavior. Forcing content to move across platforms means forcing moderation systems to operate outside the environments they were designed to govern.
The “fediverse”—a network of federated social-media servers connected through the ActivityPub protocol—offers a preview. One study found that its 18,000-plus independently operated servers create persistent moderation conflicts. When a user encounters harmful content originating on another server, local moderators often cannot compel action. Federated systems make moderation harder because no single authority can enforce rules across the network.
The implications are serious. A platform such as Facebook could be required to interoperate with services that apply minimal moderation standards, effectively bypassing safeguards its users expect. The risks are especially acute for minors. Platforms that invest in age-based protections could be forced to maintain open channels with services that lack comparable protections altogether.
Port My Prompts, Break the Model
Some proposals go further. Virginia’s SB 85 and California’s proposed AB 2169 extend interoperability mandates to AI “model operators.” They require portability of “contextual data,” including prompts, chat histories, uploaded files, preferences, metadata, and model-generated or inferred information. The provisions suggest little understanding of how AI systems work.
No standard format exists for transferring a user’s full AI interaction history across models. The premise is confused. A conversation with a large language model reflects not just the user’s inputs, but the model’s training data, fine-tuning, and architecture. Moving the transcript does not reproduce the experience; it produces data detached from the system that generated it. Transferring a ChatGPT conversation to Claude or Gemini would leave the receiving model without the context necessary to interpret model-specific outputs.
Near-real-time interoperability would also create operational and security risks. It could expose proprietary model characteristics, degrade performance, and conflict with user expectations about how AI interactions are handled. Requiring disclosure of intermediate “reasoning” processes, for example, would create a substantial vector for extracting proprietary information, similar to recent incidents involving bot-based probing of AI systems. These provisions read less like workable regulation and more like aspirational language appended to a social-media bill to capture the politics of AI governance.
A Solution Still Looking for a Problem
The state “Digital Choice Act” movement shows how attractive rhetoric can mask weak policy. “User empowerment” sounds compelling, but these bills never identify a concrete consumer harm. Instead, they attempt to impose a single technical framework on heterogeneous products that users deliberately treat as different services. Social media is not a standardized utility, and interoperability cannot manufacture equivalence where none exists.
The costs are clearer than the benefits. Exporting a “social graph” inevitably transfers information about nonconsenting users. Open interfaces enlarge security vulnerabilities and create enforcement gaps once data leaves a platform’s controlled environment. Cross-platform data flows would also disrupt content-moderation systems that depend on platform-specific norms and tooling, weakening safeguards that users—including minors—rely on. The AI provisions compound the problem by mandating portability of interaction data that has meaning only within the model that generated it, while creating new risks of proprietary information leakage.
Nor is there strong evidence of consumer demand. Users already multi-home, maintain distinct online identities, and can download their data under existing privacy regimes. The central difficulty is not technical lock-in; it is that user-generated content derives value from context. Moving it does not recreate the experience.
Before adopting sweeping mandates, lawmakers should ask two questions: what market failure requires intervention, and will interoperability actually solve it? So far, the bills answer neither. They promise portability, but deliver privacy risks, moderation conflicts, and technical incoherence—while solving no clearly identified problem.