Notice & Comment

The platforms should win the NetChoice content moderation cases—but narrowly

by Kyle Langvardt & Alan Z. Rozenshtein

Later this month the Supreme Court will hear First Amendment challenges to two state laws that regulate the content policies of large social media platforms. NetChoice v. Paxton involves a 2021 Texas statute that makes it unlawful for platforms to restrict content based on “viewpoint.” Moody v. NetChoice, meanwhile, involves a 2021 Florida law that prohibits most moderation of “journalistic enterprises” as well as speech by or about political candidates during the campaign season.

Most of the debate around these laws is pitched in big, blunt terms. NetChoice, the Silicon Valley trade group behind both cases, argues in its briefs that content moderation by social media platforms is the First Amendment equivalent of editorial discretion by a newspaper, and that state regulation of this editorial discretion in either context is anathema. Similarly, writing last month on Notice & Comment, Cato Institute fellow Thomas Berry argued that “forcing every platform to use identical viewpoint-neutral moderation rules would be a profound infringement on the editorial freedom that has produced a range of social media experiences” and that “[b]eing forced to carry, support, or subsidize speech that one opposes is itself a First Amendment injury.”

The laws’ defenders also tend to speak in absolute terms. For example, the Federal Communications Commission’s two Republican commissioners, Brendan Carr and Nathan Simington, rejected, also on this site, the notion that social media platforms have any “unbridled right to censor or otherwise discriminate against other peoples’ speech.” Instead, they argued that the First Amendment allows the government to impose nondiscrimination obligations on large social media platforms. But without some limiting principles—and Carr and Simington did not identify any—the “nondiscrimination” label could hand state governments a power to regulate social media that is nearly as “unbridled” as the power NetChoice would claim for Big Tech under the label of “editorial discretion.” 

Both of these absolutist positions are dangerous. Fortunately, there is no need to decide between them. Rather than deciding whether nondiscrimination or must-carry rules for social media are categorically constitutional or not, we would begin by assessing whether the specific provisions of Texas’s HB20 and Florida’s SB7072 are well-drawn to achieve their stated objectives. 

A poor fit between regulatory means and ends is usually a deal-breaker in First Amendment law. Even at its most lenient, the First Amendment almost always requires that laws burdening speech be “narrowly tailored” to the goals they claim to achieve. This doesn’t always mean that states have to reduce the burden on speech to an absolute minimum, but it does require states to avoid burdening speech in a manner that is “substantially broader than necessary to achieve the government’s interest.” 

The Texas and Florida laws fail this test. Both are so poorly designed that in implementation, they look more likely to undermine than enhance the ability of social media users to communicate on issues of public concern. This is why we have concluded that both laws are unconstitutional. We say this in spite of our view that the states’ stated goals are compelling in principle and the platforms’ freestanding First Amendment interest is weak (for a detailed doctrinal analysis of this point, see this article at 363–70). 

And so, while we are sympathetic to Carr and Simington’s policy goals, we would hold these particular laws unconstitutional, while urging the Supreme Court to rule narrowly on the underlying constitutional issues, so as not to choke off future, better-crafted legislation at the state and federal levels aimed at expanding speech on online platforms.

The Texas law pressures platforms to take down controversial speech

HB20, the Texas law, forbids social media platforms to “censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on (1) the viewpoint of the user or another person [or] (2) the viewpoint represented in the user’s expression or another person’s expression.”

The law defines “censorship” broadly. Almost anything a platform might do to make a piece of content less visible or viral counts as “censorship” under Texas’ definition. “Demonetization” – flagging a piece of content as unfit for advertising – also counts as “censorship.” All these practices must be undertaken in a manner that avoids discrimination “based on viewpoint.”

“Based on viewpoint” is a broad term, however, and HB20 does not define it. Presumably “viewpoint” in HB20 means the same thing it means in First Amendment case law. If that is the case, then a prohibition against content moderation “based on viewpoint” means that a platform must act neutrally toward a range of offensive and outright hateful speech. Offensiveness itself, according to the Supreme Court, constitutes a “viewpoint” for First Amendment purposes. Hate speech laws, under First Amendment case law, are considered viewpoint-based and therefore unconstitutional.

Suppose, then, that a platform operating in Texas chose to carry news reporting and commentary on an incident of ethnic violence. In order to observe the “viewpoint-neutrality” rule, the platform would also have to carry and amplify false reports claiming the incident did not take place. On top of that, it would also have to carry and amplify content endorsing ethnic violence. And because Texas’ law defines “demonetization” as a form of censorship, the platform would have to continue placing ads next to all this pro-violence content.

Ad-driven social platforms—and the ones covered by Texas’ law are all ad-driven—have strong business incentives to avoid carrying so much hateful, broadly offensive content and to avoid associating themselves and their advertising partners with it. This is because their business model depends on (1) inducing users to spend as much time and attention on the platform as possible and (2) selling that user attention to advertisers. When users leave, platforms lose opportunities to sell that attention to advertisers. Worse still is when the advertisers themselves leave a platform to avoid being associated publicly with content that threatens their “brand safety.” In either event, the platform loses business.

In our example, the Texas law would allow a platform to avoid these business costs by suppressing all speech on the incident of ethnic violence – true or false, decent or hateful – on a viewpoint-neutral basis. And we expect this is what a profit-maximizing platform would usually do. Platforms’ incentives to carry news on controversial issues, after all—race relations, abortion, gender, or any other of the many fronts of the culture war—are already quite weak. When Canada passed a law requiring social platforms to compensate news outlets for linking to news, Meta simply prohibited news articles on its platforms.

In effect, then, Texas’ ban on viewpoint-based content moderation operates as a “tax” on carriage of any side of a controversial issue. This doesn’t necessarily mean that platforms will always remove controversial speech, but it does mean that they will have clear legal incentives to do so in Texas. 

Laws that produce this effect tend to fare poorly at the Supreme Court, and rightly so. In Miami Herald v. Tornillo, the Supreme Court struck down a “right of reply” statute that required newspapers that criticized a political candidate to print that candidate’s reply. Faced with the cost of publishing this reply, a newspaper might determine that “the wisest course is to avoid controversy.” The same logic applies, likely with greater force, to HB20 and ad-driven social platforms considering the cost of presenting “all sides” of a sensitive issue.

Carr and Simington would distinguish Tornillo on the theory that newspapers have a weightier expressive interest than platforms do in the speech they publish. We share the view that analogies between newspapers and social platforms are generally superficial. But what Carr and Simington ignore here is that platform users are the ones whose expressive interests will suffer in the clearest sense when platforms remove their speech as part of an effort to comply with HB20.

We are not opposed, in principle, to neutrality obligations for social platforms. But the scope of any neutrality obligation should be drawn in clear, specific terms to ensure platforms do not face incentives to remove valuable speech in an “all sides” takedown. There should be explicit safe harbors for platforms to remove categories of marginal content that reliably threaten the platform’s revenue flow. And platforms should be assured explicitly that they may remove apparent lies without triggering any prohibition on discriminatory content moderation. Otherwise, the obligation to carry lies will stand as a deterrent against carrying truthful factual reporting.

Unfortunately, Texas’ law does not include any limitations or safe harbors to discourage platforms from engaging in “all sides” censorship. The law does not, for example, allow an affirmative defense for platforms that violate the viewpoint-neutrality requirement for reasons of business necessity. Nor does it include any kind of carve-out for false speech. 

Instead, the law allows only four potential carve-outs from the viewpoint-neutrality requirement. First, it allows platforms to censor “expression that … the social media platform is specifically authorized to censor by federal law.” Second, platforms may censor content that “directly incites criminal activity or consists of specific threats of violence against” a list of protected groups. A third carve-out allows platforms to remove content by request from “an organization with the purpose of preventing the sexual exploitation of children and protecting survivors of sexual abuse from ongoing harassment.” Finally, platforms may censor “unlawful expression.” 

These carve-outs come nowhere close to encompassing the range of “lawful-but-awful” content that drives users and advertisers away. Such content includes racial, ethnic, and religious disparagement, terrorist propaganda, Holocaust denial, and more. These are all examples of content that social media platforms spend a lot of energy suppressing, and each of them represents one “viewpoint” on an important issue where “both sides” censorship would unacceptably distort and impoverish public discourse.

A well-drawn content moderation law also should not expose platforms to litigation that is more burdensome than necessary to bring a platform’s overall content policies into alignment with the public objective. The more money and effort platforms have to spend defending against claims that they are not moderating in a viewpoint-neutral manner, the less likely they are to carry content that invites dispute.

Meta’s platforms, for example—Facebook, Instagram, and Threads—deal with literally billions of pieces of content a day, and some degree of inconsistency and error in moderation is inevitable. It should therefore take more than a single instance of discriminatory moderation to bring a claim, and some showing of fault should be required. Courts should be particularly skeptical toward laws that increase leverage against defendant platforms through mechanisms such as statutory damages, attorney’s fees, and venue rules. Otherwise, a nondiscrimination rule will function as a more or less automatic penalty for hosting any substantial amount of controversial content. Finally, the law should guard against strategic litigation by civil plaintiffs or state officials to pressure platforms into censoring disfavored content. 

In Texas’ case, the means of enforcing the de facto “tax” on controversial subject matter seems almost calculated to incline platforms toward censoring whole discussions. Users and the state attorney general alike are empowered to seek injunctive relief against platforms whenever even a single violation of the viewpoint-neutrality mandate occurs. In either case, a victorious plaintiff is entitled to costs and attorney’s fees. A platform that defends a case successfully, meanwhile, is not entitled to fees.

The law also makes it as difficult as possible for platforms to carry their victories forward as precedent. Defendant platforms are barred from invoking nonmutual issue or claim preclusion as a defense–which means a plaintiff who presses and loses a claim against Meta in one suit can force Google to litigate the same claim on identical facts in a second suit. But plaintiffs are not subject to this limitation—meaning that once plaintiff A secures a victory against Meta, plaintiffs B, C, and D are entitled to automatic victories if they can show that their cases are identical to plaintiff A’s. 

The Florida law would hinder platform users’ efforts to communicate effectively

SB7072, the Florida law, has a different set of problems. These mostly relate to provisions in the law that forbid moderation of speech by “journalistic enterprises” and speech related to political candidates during campaign season. 

Like Texas’ viewpoint-neutrality requirement, Florida’s rule that platforms must carry journalists and political candidates sounds reasonable, even thoughtful, on the surface. But on a closer look, these provisions are so crudely drawn that, in operation, they would severely undermine social media users’ efforts to communicate about politics, news, and potentially almost anything else.

SB7072 prohibits ordinary moderation of speech by or about political candidates that occurs “beginning on the date of qualification and ending on the date of the election or the date the candidate ceases to be a candidate.” During this period, a platform may not “willfully deplatform”—i.e., suspend the account of—the candidate unless doing so is necessary to comply with another state or federal law. 

On its own, the rule against “deplatforming” political candidates’ accounts during a political campaign strikes us as a reasonable safeguard against a potentially serious problem. But Florida’s rule on political candidates goes much further than this in two respects.

First, the law doesn’t just require platforms to carry political accounts; it also says that platforms may not subject candidates’ content to any kind of “post-prioritization” or “shadow banning.” What this means under the definitions provided in the statute is that platforms may not do anything to boost or restrict the visibility of any piece of content published by a candidate. They may not, for example, reduce the number of users who see a video, or lower a post’s ranking in a user’s news feed. 

Critically, SB7072 doesn’t seem to allow platforms to match user preferences regarding candidate-related content. Platforms usually show less candidate-related content to users who always scroll past it. But individualizing a user’s feed in this manner would seem to qualify as “post-prioritization” and/or “shadow banning” as defined in the statute. Even if a user has tapped the “block” button on a candidate’s account, the statute as written seems to make it unlawful for the platform to honor such a user’s request during the campaign season. 
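To make the compliance problem concrete, here is a minimal, purely hypothetical sketch of the kind of per-user feed filtering the statute appears to forbid. The class names, fields, and scoring rule are our own illustrations under stated assumptions, not any platform’s actual system or the statute’s precise terms.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    is_candidate_content: bool  # posted by or about a registered candidate
    base_score: float           # the platform's generic relevance score

@dataclass
class User:
    blocked_accounts: set = field(default_factory=set)
    candidate_skip_rate: float = 0.0  # share of candidate posts this user scrolls past

def personalize(feed: list[Post], user: User) -> list[Post]:
    """Ordinary per-user filtering and ranking.

    Under SB7072's definitions, both steps below would appear to count as
    'shadow banning' or 'post-prioritization' when applied to candidate
    content during a campaign, even though each simply honors the user's
    own signals.
    """
    scored = []
    for post in feed:
        # Honoring an explicit "block" hides the post for this user only.
        if post.author in user.blocked_accounts:
            continue
        score = post.base_score
        # Down-rank candidate content for a user who consistently skips it.
        if post.is_candidate_content:
            score *= (1.0 - user.candidate_skip_rate)
        scored.append((score, post))
    return [post for _, post in sorted(scored, key=lambda pair: -pair[0])]
```

Nothing in this sketch suppresses a candidate’s speech platform-wide; each adjustment reflects a single user’s own choices, yet the statute’s text does not appear to distinguish the two.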

Second, Florida’s ban on “post-prioritization” and “shadow banning” during the political campaign doesn’t just extend to candidates’ own content—it covers all “content or material posted by or about” the candidate during this period. Read literally, this would seem to indicate that all comments about political candidates made by any user anywhere on the platform are somehow entitled to equal priority in every user’s content feed, even if the speaker has no public profile and no relationship to the listener, and seemingly, even if the listener has previously blocked the speaker’s content. The one exception is for paid political advertising: in return for payment from a candidate or an independent political spender, a platform is free to amplify election-related speech.

Another problem with SB7072 is its provisions regarding journalists: “A social media platform may not take any action to censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast.” The act’s sole exceptions to this rule are for paid content and obscenity.

“Journalistic enterprise” is defined exclusively in terms of an entity’s audience size and the volume of content it produces. The thresholds for size and volume are set low enough to include conspiracy theorists and extremists whose presence may threaten a platform’s business interests in retaining users and advertisers. And as the Eleventh Circuit noted, the statute’s exclusively size-based definition of “journalistic enterprise” means that “PornHub [would qualify] as a ‘journalistic enterprise’ because it posts more than 100 hours of video and has more than 100 million viewers per year.”

Unlike the Florida law’s facially absolute bans on moderating political candidate-related speech, the bans on moderating journalistic enterprise speech apply only when the moderation is “based on content.” Perhaps this leaves platforms room to moderate speech by “journalistic enterprises” based on non-content considerations, but it is hard to see what these considerations could possibly be in practice—maybe technical considerations such as sound quality. What is clear, however, is that platforms are forbidden to apply any of their general content rules other than obscenity bans to journalistic-enterprise speech. 

Among journalistic enterprises, then, platforms would have no leeway to remove content that users and advertisers find objectionable. And as with the law’s provisions on political candidate speech, it seems that platforms would not be permitted to tailor users’ exposure to “journalistic enterprise” speech based on individual preferences. Nor would platforms even have much ability to inform users that they do not endorse this content—for under the Florida statute, “censor[ship]” includes “post[ing] an addendum to any content or material posted by a user.” 

SB7072’s provisions regarding political and journalist speech threaten user speech interests in two ways. First, they interfere with a platform’s ability to clear away “noise” that threatens to drown out user expression. Florida’s virtual ban on election-related content moderation practically begs bad actors to “flood the zone with shit,” to use Steve Bannon’s colorful phrase for a strategy to drown truthful political coverage in lies, speculation, and noise.

Any registered political candidate, no matter how trivial their level of popular support, would have a channel to post unlimited amounts of content on any subject, true or baseless, without suffering an account suspension or encountering any limit on distribution. Non-candidates would have a similar privilege to post endlessly about candidates for office without moderation. Saboteurs could easily render a social platform all but unusable during the campaign season. 

The harm here is not that Florida’s law encourages platforms to remove speech, but that it forces platforms to muffle speech by setting up unnecessary obstructions to clear communication. At worst, rules like these could make the platform close to unusable, impairing user speech in much the same way as if the state had outlawed the platform altogether. For as the media scholar Tarleton Gillespie has written, content moderation “is, in many ways, the commodity that platforms offer.” 

Any must-carry law will produce a degree of what some might consider drowning-out. Audience attention is scarce, after all, and must-carry laws by their nature redistribute a bit of the audience attention that some speakers might otherwise have captured. We do not find such effects inherently troubling, and we think there may well be a place for laws that shield news and political speech to some extent from some types of content moderation. But in order to pass any level of First Amendment scrutiny, any such law will need to incorporate safeguards to ensure that the law does not undermine the interest in free speech that it purports to advance. Laws fail this test if they deprive a platform of the ability to deflect zone-flooding strategies or to clear away content such as spam that clogs the lines of communication. 

A better-designed law might have tried to address these concerns in a few ways. Florida might have allowed statutory safe harbors, for example, to remove what Facebook calls “coordinated inauthentic behavior” by trolls or bots. Or perhaps it is possible to put some kind of quota or cap on the proportion of “must-carry” political or journalistic content that appears in any given user’s feed; cable operators, as a point of comparison, must allocate a fixed quota, but no more, to local broadcasters. Finally, Florida might have extended must-carry privileges to a narrower class of speech and speakers – political candidates, for example, rather than all content about political candidates – to reduce the risk of a platform-crashing deluge of unwanted must-carry content.

A second way that the Florida law undermines user speech on online platforms is by encouraging poor content moderation, which in turn could drive users away from the platform. Many users wish to minimize exposure to political content and journalistic entities in general. And there is probably an even larger group of users who would like to see their social platform moderate content coming from at least some political candidates, at least some social media users with opinions about political candidates, or at least some “journalistic enterprises.” This is particularly true when “journalistic enterprise” is defined in terms broad enough to include at least one video streaming site dedicated to pornographic content. 

Platforms that are forbidden by Florida law to accommodate these users are likely to wither. This is why a for-profit public company like Meta spends $5 billion a year on content moderation and employs 40,000 people to carry it out, and it is why platforms like X/Twitter that gut their moderation teams, whether to cut costs or out of ideological commitments, often see decreased user engagement. And a diminished user base doesn’t just cut into the platform’s bottom line—it also de-amplifies speech by users of the platform by shrinking their potential audience.

We don’t mean to suggest, of course, that must-carry laws will always cause platforms to suffer a mass exodus, or that every small loss of user engagement at a platform puts a serious burden on the speech interests of the remaining users. But it generally serves users’ speech interests to scrutinize laws that force platforms into what would normally be considered incompetent content moderation from the perspective of audience engagement and retention. A law that purports to enhance free speech on social media, but that in effect virtually ensures a mass exodus from social media, is too poorly tailored to its own purpose to pass heightened scrutiny.

Better tailoring is possible, though. The audience-scattering aspect of Florida’s law would be mitigated substantially if platforms were allowed to match users’ experience to their content preferences. At the very least, users should be permitted to indicate sources or subject matter they do not wish to follow, and platforms should be permitted to honor that preference. A platform that lacks even a “block” button cannot retain a sizable audience for long.

But a “block” button on its own is unlikely to ensure audience retention. Platforms typically take a much more proactive approach, drawing algorithmic inferences about unstated user preferences from user activity. If a regulated platform is to meet contemporary user preferences, a must-carry law will have to reconcile the must-carry imperative with the need for algorithmic customization. In principle, a law might prohibit burdens that apply across the board, limiting a speaker’s or post’s visibility to all users, while still allowing a platform to engage in individualized “matchmaking” between user and content based on user preferences. In practice, we expect the distinction to be quite a bit fuzzier than it initially appears.
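A toy scoring function, using invented names and weights of our own rather than any actual ranking system, illustrates why the line is fuzzy: an across-the-board demotion and an individualized preference adjustment are often just different multipliers feeding the same ranking score, so a regulator trying to permit one while prohibiting the other has to find a way to pull them apart.

```python
def rank_score(post_quality: float,
               global_demotion: float,
               user_affinity: float) -> float:
    """Hypothetical ranking score for one post shown to one user.

    global_demotion: a platform-wide multiplier (e.g., 0.5 to suppress a post
        for every user) -- the kind of across-the-board burden a must-carry
        law might prohibit.
    user_affinity: a per-user multiplier inferred from that user's own
        behavior -- the individualized "matchmaking" the law might permit.

    Because both factors enter the score the same way, the output alone
    cannot reveal which of them reduced a post's reach.
    """
    return post_quality * global_demotion * user_affinity

# Two posts end up equally visible to this user: one because the platform
# demoted it globally, the other because this user rarely engages with
# content of that kind.
print(rank_score(post_quality=1.0, global_demotion=0.5, user_affinity=1.0))  # 0.5
print(rank_score(post_quality=1.0, global_demotion=1.0, user_affinity=0.5))  # 0.5
```

The sketch is deliberately simplistic, but it captures the enforcement problem: a regulator (or a court) looking only at outcomes cannot tell a prohibited platform-wide burden from a permitted user-driven one.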

Conclusion

We’ve focused on the implementation flaws of the Texas and Florida laws because those specific laws are the ones before the Supreme Court.  We emphasize, though, that there’s a broad area of potentially constitutional regulation available between the extremes of these laws on the one hand and NetChoice’s totally laissez-faire alternative on the other. 

Philosophically, we are fully aligned with those who believe that the government has a legitimate role in ensuring that online platforms remain true digital public squares. But unlike some other scholars, we’re not so committed to enabling government regulation of online platforms that we’re willing to hold our noses and excuse the manifest constitutional infirmities of the Texas and Florida laws. We only insist that such government action not undermine the very thing it sets out to accomplish. We hope that the Court strikes down the Texas and Florida laws as violating the First Amendment, but in doing so rejects the platforms’ overbroad First Amendment claims and leaves ample room for smarter, more carefully crafted legislative and regulatory innovation in the years to come.

Kyle Langvardt is an assistant professor of law at the University of Nebraska College of Law. Alan Z. Rozenshtein is an associate professor of law at the University of Minnesota Law School.
