Notice & Comment

Section 230, Gonzalez, and the Ghost of FCC Regulation, by Adam Candeub

The Supreme Court will hear oral argument in Gonzalez v. Google next month. This is the first time the Court will consider Section 230(c)(1) of the Communications Decency Act, the central internet liability statute. The Gonzalez plaintiffs are families of victims of the Paris, Istanbul, and San Bernardino terrorist attacks. They claim that YouTube’s targeted recommendations aided and abetted the terrorism that led to their loved ones’ murders. Gonzalez presents the question whether YouTube’s targeted recommendations are the platform’s own speech or user speech. If they are YouTube’s speech, then Section 230(c)(1) provides no legal protection from this suit.

What Is at Stake in Gonzalez

But this case’s implications extend beyond the protection afforded targeted recommendations. Also at stake is whether Section 230(c)(1) secures the platforms in their censorship powers.

The platforms, such as Google and Facebook, have consistently argued that Section 230(c)(1) protects all their editorial discretion and decision-making for all content on their platforms. This would mean, as some courts have ruled, that they are free to de-platform, de-boost, or deceive users in violation of state consumer fraud, contract liability, and even civil rights law. Some have argued that Section 230(c)(1)’s purported protection of “editorial discretion” even preempts state antidiscrimination social media laws, like Texas’ H.B. 20. This, however, is rather odd.

For one thing, whereas Section 230(c)(2) protects the platforms from “liability,” Section 230(c)(1) only bars them from being treated as publishers. So, at least according to the text, Google or Facebook cannot claim a general protection from liability under Section 230(c)(1). That provision only protects them from being sued in actions, such as defamation, in which being the publisher is an element of the cause of action, not from actions for fraud, discrimination, and the like, in which being a publisher is not an element.

And because Section 230(c)(2) expressly protects platforms when they eliminate material, Section 230(c)(1)’s protection against being treated as publishers should be understood to protect them only for the information provided by others that they publish, not for their decisions to remove material.

So it is striking that the platforms have relied heavily on an overexpansive interpretation of Section 230(c)(1) to protect them from liability for censoring material—precisely the protection offered by Section 230(c)(2), not Section 230(c)(1). This misreading of the statute is no accident. Section 230(c)(2) offers only narrow protection against liability in the sense of damages, covers only certain types of content, and, in any case, may be unconstitutional. The platforms therefore want to tuck its protections into Section 230(c)(1).

Although several courts of appeals have embraced this misinterpretation, at least one Supreme Court Justice has strongly rejected it. And since the Court will rule on Section 230(c)(1)’s meaning in Gonzalez, that case could decide a great deal.

What is interesting about this case from an administrative law perspective is that Section 230 was codified into the Telecommunications Act of 1996, which, in turn, was codified into the Communications Act of 1934, one of the federal government’s most important regulatory statutes. The Supreme Court has ruled that the Telecommunications Act must be read in light of the Communications Act—and Section 230 thus incorporates a vast body of regulatory and administrative law. And, as we will see, this arcane regulatory history plays a pivotal role in interpreting Section 230. In particular, the little-known definition of “electronic publishing” in Section 274 of the Communications Act undermines the vast Section 230 protection against being treated as a “publisher” that several Google amici claim. Instead, Section 274 demonstrates that “publish” in Section 230(c)(1) must be read narrowly—a reading that, taken noscitur a sociis with “speaker,” would protect platforms only when they act as primary publishers or speakers, not distributors, of their users’ content.

The Attempt to Turn Section 230(c)(1)’s Modest Protections into a Broad Protection for “Editorial Function” or “Editorial Discretion” 

Section 230(c)(1) does not grant broad protections for platforms’ editorial judgments. Rather, it only protects internet platforms from liability that their users create in the content they post. Section 230(c)(1) is short. It states that platforms—or “interactive computer services,” the statutory term that encompasses platforms such as Facebook or Twitter—cannot be treated “as publisher or speaker . . . of information provided by another” user or entity on the internet. If you libel your friend on Facebook, Section 230(c)(1) protects Facebook, limiting your friend’s legal recourse to suing you. Because it involves a three-party relationship, in which a plaintiff cannot bring an action against a platform for liability created by another user’s content, Section 230(c)(1) mirrors traditional legal rules for telephone and telegraph companies. They too have no legal liability for carrying their users’ unlawful messages.

The question Gonzalez presents is whether targeted recommendations are user speech or YouTube’s speech. If they are users’ speech, Section 230(c)(1) protects Google (as YouTube’s parent company) from being treated as the publisher in causes of action, such as defamation, in which being a publisher matters. According to lower courts, following the interpretation pressed by the platforms, Section 230(c)(1) protects them from any and all liability the speech creates. But if the recommendations are YouTube’s own speech, Section 230(c)(1) offers no protection.

As I have argued, to the degree this speech is simply YouTube’s own words, Section 230 does not apply. Whether the recommendations are YouTube’s expressive actions is a detailed factual question involving how Google’s algorithms work and whether they simply transmit user content or reflect Google’s own judgment and particularized message. The Court, therefore, should remand this case to the district court for further discovery, as I argued in my amicus brief.

Despite the quantity of amicus briefs, they contain few new arguments. Most just argue that Section 230(c)(1) protects all of a platform’s “editorial functions” or “decision-making,” an argument clearly at odds with the statute’s text. Under the expanded “three-prong” test that Google urges, which rephrases the traditional editorial function test, every decision relating to content that a platform makes enjoys complete immunity. Courts have relied upon this mistaken interpretation of Section 230 to dismiss plaintiffs’ claims alleging that the platforms made fraudulent representations concerning how their accounts would be treated, and even claims based on breaking explicit contracts and promises. Shockingly, Google’s own lawyers recently represented to a federal court of appeals that Section 230(c)(1) protects its decision to censor speech in favor of gays; in state court, the platforms have argued that the provision allows them to kick off women and religious minorities—all in contravention of civil rights laws.

This absurd result, reading Section 230(c)(1) to protect all of a platform’s editorial decisions, violates the rule against surplusage, renders Section 230(c)(2) a nullity, and completely upends the statute. The primary purpose of the Communications Decency Act was to overturn the Stratton Oakmont case and give platforms immunity from liability for users’ content even when they moderate content to protect children from obscenity and similar material. Reading Section 230(c)(1) to protect all of a platform’s editorial decisions makes Congress’s more focused immunity in Section 230(c)(2) superfluous. And, indeed, as I have argued, the traditional editorial function test derives from an out-of-context quotation from the Zeran case and results from that case’s sloppy use of pronoun antecedents. “Traditional editorial function” in Zeran refers to a third party’s editorial decisions.

A New Way to Expand Section 230(c)(1)? The “Common Law” Interpretation of “Publish” and Section 230(f)(2)

Because neither the text of Section 230(c)(1) nor the CDA’s statutory structure immunizes a platform’s “editorial function,” the Internet Scholars Brief, a Supreme Court amicus brief filed on behalf of a group of leading internet and copyright law professors, and the Computer & Communications Industry Association (CCIA) brief, filed on behalf of a group of trade associations, forward a new argument to accomplish what the “traditional editorial function” test does. They use the definition provisions of Section 230, along with the “common law” interpretation of the word “publish,” to reach the same result. This new argument ignores Section 230’s text and history—and, most important, omits the statute’s own definition of “publishing,” found in 47 U.S.C. § 274, which, by their own argument, applies to Section 230(c)(1). This statutory definition supersedes any claimed common law interpretation.

The Internet Scholars and CCIA start with “interactive computer service,” the statutorily defined category in Section 230(c)(1) that receives the provision’s protection. Section 230(f)(2) defines “[i]nteractive computer service” to mean “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server” (emphasis added). And, in turn, “access software provider” is defined as “a provider of software . . . or enabling tools that do any one or more of the following: (A) filter, screen, allow, or disallow content; (B) pick, choose, analyze, or digest content; or (C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.” 47 U.S.C. § 230(f)(4).

The Internet Law Scholars and the CCIA briefs argue that Section 230(c)(1)’s use of “publish” somehow includes everything an interactive computer service might do. At common law, they contend, “publish” meant to distribute and otherwise disseminate written materials. According to their historical analysis, “publish” includes all the things in the definition in Section 230(f)(2), including those incorporated by reference from Section 230(f)(4). Thus, even though Section 230(c)(1) states that it only protects against liability for speaking or publishing third-party content—or, as its conference report says, “treating [interactive computer services] as publishers or speakers of content that is not their own because they have restricted access to objectionable material”[1]—it really protects everything in Section 230(f)(2) and (f)(4), i.e., filtering, screening, allowing, disallowing, picking, choosing, analyzing, digesting, transmitting, receiving, displaying, forwarding, caching, searching, subsetting, organizing, reorganizing, or translating the content of another internet user.

As an initial matter of statutory interpretation, the argument conflates a definitional section, Section 230(f)(2), with an operative section, Section 230(c)(1). Regardless of how “interactive computer services” are defined, they receive protection only when treated as “publisher or speaker” of “information provided by another.” 47 U.S.C. § 230(c)(1). If Congress had wanted to protect them for publishing, it would have said that they shall not be liable for publishing. Instead, Congress said they shall not be treated as publishers—meaning not subject to suits in which being a publisher is an element of the cause of action. If Congress had wanted to immunize all the activities that access software providers engage in, it would simply have said that access software providers shall have no liability for what they do—i.e., that filtering, screening, allowing, disallowing, picking, choosing, analyzing, digesting, transmitting, receiving, displaying, forwarding, caching, searching, subsetting, organizing, reorganizing, and translating content all have immunity.

Congress, in Section 230(c)(1), did none of that. It is therefore inexplicable why the Internet Law Scholars claim that this statutory provision creates “statutorily protected filtering, picking, and choosing” rights for platforms. (p. 16) This is just a weak backdoor argument for the broad “editorial discretion” protection that the statute does not provide.

Finally, the legislative history of the term “access software provider” demonstrates its definitional purpose—it had to do with software to block pornography, not with censorship rights for platforms. The term was also used in the part of the Communications Decency Act that the Supreme Court struck down in Reno v. ACLU.[2] Unlike the definition of “interactive computer service” in Section 230(f)(2), which broadly refers to “services,” “systems,” and “providers,” the definition of “access software provider” refers to software and tools that “filter” and “screen.” Access software providers are tools for consumers to access the internet and possibly filter it. Consistent with this understanding, the legislative history describes “access software providers” that work with “[o]n-line services and . . . are liable [under the overturned Section 223] where they are conspirators with, advertise for, are involved in the creation of or knowing distribution of obscene material or indecent material to minors.”[3] Thus, rather than reflecting a notion of publishing, “access software provider” simply refers to an entity that provides consumer internet access and that would face legal consequences for sending obscene materials to children.

The Telecommunications Act of 1996 and Its Administrative and Regulatory History Directly Refute the Common Law Interpretation of Section 230’s Term “Publisher”

To rope the definitional Section 230(f) into the meaning of “publish” in Section 230(c)(1), the Internet Law Scholars assert, significantly citing only the parties’ briefs as evidence, that “publish” in Section 230(c)(1) has a “common law” meaning. They then claim that Section 230(f)’s litany of actions are all part of the common law concept of publishing referred to in the definition of “interactive computer service.”

But the biggest problem with the position of the Internet Law Scholars and CCIA is that the Telecommunications Act already uses “publish” in a definition that contradicts its purported “common law” meaning. In fact, the Act gives “publish” a narrow meaning, one that refers to “speaking” or “primary publishing” and excludes distribution decisions, i.e., “dissemination” and “provision.”

The Internet Law Scholars look to the definition of “interactive computer service,” 47 U.S.C. § 230(f), and find “access software provider,” as discussed above. But the Internet Law Scholars omit that “interactive computer service” also includes the term “information service.” “Information service” is an old term in FCC regulation with origins in the Modified Final Judgment that broke up AT&T in 1984.[4] The Telecommunications Act amended the Communications Act to include the term in its definitions, which state that an “information service” is “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications, and includes electronic publishing . . . .” 47 U.S.C. § 153(25) (emphasis added). And the statute defines “electronic publishing”: “‘electronic publishing’ means the dissemination, provision, publication, or sale to an unaffiliated entity or person.” 47 U.S.C. § 274(h) (emphasis added).

Thus, as set forth in Section 274, the Telecommunications Act treats “publication” as simply one subpart of “electronic publishing,” one that, significantly, is distinct from “dissemination” and “provision.” Rather than adopting a “common law” meaning of “publish,” which includes “distribution” and perhaps all the things referred to in Section 230(f)(2) and Section 230(f)(4), the Communications Decency Act’s definition of “publish” is quite narrow—and is distinct from “dissemination” and “provision.”

“Electronic publishing” is itself an old regulatory term, used by the Federal Communications Commission, incorporated into the Bell Modified Final Judgment,[5] and then carried into the Telecommunications Act of 1996. Dating from the 1970s, it was a sort of “moon shot” regulatory term that referred to things the FCC expected could one day be done over telephone lines but that were not yet available given the technology of the time. The FCC issued regulations about this hypothetical service because it feared the old Ma Bell would use its monopoly in the telephone system to dominate the emerging computer industry.[6] “Electronic publishing” referred to things that social media companies now routinely do, such as “allowing users to engage in a variety of banking, securities, and other transactions”; “shopping from a home computer”; “offerings of sports, stock, horoscope, and entertainment messages in addition to traditional time and weather”; “access to consumer guides for restaurants, theaters, shopping, transportation, and local events”; and “electronic mail.”[7]

In short, electronic publishing is what Google and the social media platforms do—and the activity for which the Internet Law Scholars argue they should have complete immunity. But, given the definition of “electronic publishing,” Section 230(c)(1)’s use of “publish” in fact refers to a subpart of “electronic publishing” and is something different from “dissemination” and “provision.” In this light, Section 230(c)(1)’s use of “publisher” refers to something akin to primary publishing—or, to apply noscitur a sociis to Section 230(c)(1), “speak[ing].” Section 230(c)(1) only protects platforms from liability resulting from being treated as primary publishers or speakers of third-party content. It provides no protection for platforms’ own editorial decisions involving the “dissemination” and “provision” of their users’ speech.

When Facebook Recommends Another User’s Speech, Facebook Is Speaking; It Is Not “Publish[ing]” the Content of Another

Finally, relying on Section 230(f), the Internet Scholars’ and CCIA’s briefs claim that YouTube’s choices in making recommendations, and even its own recommending words, are not the speech of YouTube. Rather, because, on their argument, everything in Section 230(f) is included in “publish,” Section 230(c)(1) grants immunity to platforms whenever they “filte[r], scree[n], allo[w] . . . pic[k] [or] choos[e]” user content. Thus, whenever Twitter or Facebook talks about another user’s content, it’s not Twitter’s or Facebook’s speech.

This argument is facially absurd—even beyond its flawed effort to loop Section 230(f)(2)’s definition into the meaning of “publish.” There is a difference between a telephone company completing your call and YouTube saying, “From what our company knows about you, you’ll like this video on all-expenses-paid trips to Pakistani madrassas.”

In short, there is a difference between the mere transmission of content and the platforms’ own speech. Merely transmitted content is not the speech of the platforms; for it, they receive Section 230 protection but no First Amendment protection. The platforms’ own speech, on the other hand, receives no Section 230 protection but does get First Amendment protection.

Whether YouTube’s algorithmic targeted recommendations are more akin to user speech than to dissemination or provision is something the Court cannot decide on a factual record devoid of any description of how these algorithms work—besides Google’s own partial description, regrettably amplified by its amici. Certainly, YouTube’s own statements are its own speech and fall outside Section 230’s protection. On the other hand, other types of prioritization that do not communicate any particularized platform message may not be YouTube’s speech. The Court must remand this case to the district court for further factual development, making clear that Section 230(c)(1) is a limited protection that does not encompass all of a platform’s editorial decisions.

Adam Candeub is a Professor of Law & Director of the Intellectual Property, Information & Communications Law Program at the Michigan State University College of Law.


[1] S. Conf. Rep. No. 104-230, at 194 (1996).

[2] Reno v. American Civil Liberties Union, 521 U.S. 844 (1997) (invalidating portions of 47 U.S.C. § 223).

[3] See 142 Cong. Rec. S687, S707 (daily ed. Feb. 1, 1996) (statement of Sen. Coats).

[4] United States v. W. Elec. Co., 569 F. Supp. 1057, 1100 (D.D.C. 1983).

[5] As part of the litigation that led to the modified final judgment (MFJ), court decisions defined “electronic publishing” as “the provision of any information which a provider or publisher has, or has caused to be originated, authored, compiled, collected, or edited, or in which he has a direct or indirect financial or proprietary interest, and which is disseminated to an unaffiliated person through some electronic means.” United States v. Am. Tel. & Tel. Co., 552 F. Supp. 131, 181 (D.D.C. 1982), aff’d mem. sub nom. Maryland v. United States, 460 U.S. 1001 (1983). 

[6] See, e.g., U S West Files Section 274 Elec. Publ’g Compliance Rev. Rep., 13 F.C.C. Rcd. 20346 (1998).

[7] Richard E. Wiley, Report on Legal Developments in Electronic Publishing, 27 Jurimetrics 403-422 (Summer 1987).
