12 Free Speech and Social Media
Content Moderation
Under the state action doctrine, private companies are not limited by the First Amendment in making decisions to moderate or censor content they control. For social media platforms, this extends to expelling individuals who violate the terms of service. Just as you could kick someone using racial slurs out of a gathering at your house, social media platforms can moderate their content to provide a better (and more profitable) experience for users, even if the government could not do the same in the public square. Social media platforms must inevitably engage in some degree of content moderation if they are to remain broadly relevant, as users will abandon the platform if, say, it becomes overrun with pornography, hate speech, fraud, or bots.
At first read, there is no constitutional problem here: social media platforms have property rights, whereas individuals have no right to use any particular private platform. Upon further consideration, however, a problem emerges. If, as Justice Kennedy notes in Packingham, social media has become central to political life, then losing access to such platforms is problematic even if it is the platforms themselves, rather than the government, that remove one's access. Framed this way, we now have an important tension between the property or speech rights of the platforms and the need for individuals to have access to the "public square," even if that square is run by private companies.
It is unusual but not unheard of for courts to treat private actors as public entities. In Smith v. Allwright (1944), the Court held that the Texas Democratic Party's "whites-only" primary election violated the Fifteenth Amendment even though the party was a private organization. Because Texas had essentially delegated a key public function, running elections, to a private entity, that entity became subject to constitutional constraints. A similar result came in Marsh v. Alabama (1946), where a business-run "company town" was held subject to the First Amendment because it was open to the public and performed functions ordinarily carried out by a government. In that case, Justice Black wrote, speech rights should trump property rights. For the most part, though, these exceptions remain rare, and courts have been reluctant to expand them. It's easy to see why: imagine, for example, a ruling that you could not kick a Nazi out of your house for making pro-Nazi speeches because doing so would violate his right to free speech.
Prior to the 2024 election and the marked rightward shift in social media ownership and norms, the primary axis of conflict over content moderation was between social media platforms and conservatives, who argued they had been de-platformed for their views. In response, both Florida and Texas passed laws limiting the ability of social media platforms to censor speech or deplatform individuals, particularly candidates for office, based on their viewpoints. These laws took the view that, rather than being "compilers" of speech with their own First Amendment rights (such as a newspaper editor who decides what content will be published), social media platforms are common carriers or "dumb pipes" that do not engage in their own expressive activity. NetChoice, a trade association that represents major internet platforms and advocates for free expression online, brought a facial challenge against both laws, meaning it argued the laws were unconstitutional in all their applications, or, in the language of the facial-challenge standard, that they lack a "plainly legitimate sweep" (United States v. Salerno (1987)).
Note that while the Court decided the case on a technical issue, whether the lower courts had properly applied the correct standard of analysis for a facial challenge, both the majority opinion and Justice Alito's opinion concurring in the judgment take the opportunity to discuss how they would have decided the merits had they reached them.
Moody v. NetChoice
603 U.S. ___ (2024)
Facts: Florida and Texas passed laws regulating internet platforms, specifically ones of sufficiently large size, that banned such platforms from "censoring" a user or a user's expression based on viewpoint. The laws also required platforms to notify users why they or their content had been removed. On behalf of its members, NetChoice challenged the laws as facially unconstitutional under the First Amendment, arguing that they interfered with the platforms' own First Amendment rights to curate or edit their platforms (much as a newspaper editor decides what to print or omit). The Eleventh Circuit upheld a lower court's injunction of the Florida law, while the Fifth Circuit upheld the Texas law on the grounds that the platforms' content-moderation activities were not "speech" within the meaning of the First Amendment.
Question: Did the lower courts properly conduct a facial challenge analysis?
Vote: No (9-0)
For the Court: Justice Kagan
Concurring opinion: Justice Barrett (omitted)
Concurring opinion: Justice Jackson (omitted)
Concurring in the judgment: Justice Thomas (omitted)
Concurring in the judgment: Justice Alito
JUSTICE KAGAN delivered the opinion of the Court.
Not even thirty years ago, this Court felt the need to explain to the opinion-reading public that the “Internet is an international network of interconnected computers.” Reno v. American Civil Liberties Union (1997). Things have changed since then. At the time, only 40 million people used the internet. Today, Facebook and YouTube alone have over two billion users each. And the public likely no longer needs this Court to define the internet…
… courts still have a necessary role in protecting those entities’ rights of speech, as courts have historically protected traditional media’s rights. To the extent that social-media platforms create expressive products, they receive the First Amendment’s protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products… The principle does not change because the curated compilation has gone from the physical to the virtual world…
Today, we consider whether two state laws regulating social-media platforms and other websites facially violate the First Amendment. The laws, from Florida and Texas, restrict the ability of social-media platforms to control whether and how third-party posts are presented to other users. Or otherwise put, the laws limit the platforms' capacity to engage in content moderation—to filter, prioritize, and label the varied messages, videos, and other content their users wish to post. In addition, though far less addressed in this Court, the laws require a platform to provide an individualized explanation to a user if it removes or alters her posts. NetChoice, an internet trade association, challenged both laws on their face, as a whole, rather than as to particular applications…
Today, we vacate both decisions for reasons separate from the First Amendment merits, because neither Court of Appeals properly considered the facial nature of NetChoice’s challenge…To make that judgment, a court must determine a law’s full set of applications, evaluate which are constitutional and which are not, and compare the one to the other. Neither court performed that necessary inquiry.
To do that right, of course, a court must understand what kind of government actions the First Amendment prohibits. We therefore set out the relevant constitutional principles, and explain how one of the Courts of Appeals failed to follow them. Contrary to what the Fifth Circuit thought, the current record indicates that the Texas law does regulate speech when applied in the way the parties focused on below—when applied, that is, to prevent Facebook (or YouTube) from using its content-moderation standards to remove, alter, organize, prioritize, or disclaim posts in its News Feed (or homepage). The law then prevents exactly the kind of editorial judgments this Court has previously held to receive First Amendment protection… Texas has thus far justified the law as necessary to balance the mix of speech on Facebook’s News Feed and similar platforms; and the record reflects that Texas officials passed it because they thought those feeds skewed against politically conservative voices. But this Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression—to “un-bias” what it thinks biased, rather than to leave such judgments to speakers and their audiences. That principle works for social-media platforms as it does for others…
I
As commonly understood, the term “social media platforms” typically refers to websites and mobile apps that allow users to upload content—messages, pictures, videos, and so on—to share with others. Those viewing the content can then react to it, comment on it, or share it themselves. The biggest social-media companies—entities like Facebook and YouTube—host a staggering amount of content. Facebook users, for example, share more than 100 billion messages every day. And YouTube sees more than 500 hours of video uploaded every minute.
In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages— say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content.
In 2021, Florida and Texas enacted statutes regulating internet platforms, including the large social-media companies just mentioned … both contain content-moderation provisions, restricting covered platforms’ choices about whether and how to display user-generated content to the public. And both include individualized-explanation provisions, requiring platforms to give reasons for particular content-moderation choices.
Florida’s law regulates “social media platforms,” as defined expansively, that have annual gross revenue of over $100 million or more than 100 million monthly active users. The statute restricts varied ways of “censor[ing]” or otherwise disfavoring posts— including deleting, altering, labeling, or deprioritizing them—based on their content or source. For example, the law prohibits a platform from taking those actions against “a journalistic enterprise based on the content of its publication or broadcast.” Similarly, the law prevents deprioritizing posts by or about political candidates. And the law requires platforms to apply their content-moderation practices to users “in a consistent manner.” …
The Texas law regulates any social-media platform, having over 50 million monthly active users, that allows its users “to communicate with other users for the primary purpose of posting information, comments, messages, or images.” With several exceptions, the statute prevents platforms from “censor[ing]” a user or a user’s expression based on viewpoint. That ban on “censor[ing]” covers any action to “block, ban, remove, deplatform, demonetize, deboost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” The statute also requires that “concurrently with the removal” of user content, the platform shall “notify the user” and “explain the reason the content was removed.” The user gets a right of appeal, and the platform must address an appeal within 14 days…
The Eleventh Circuit upheld the injunction of Florida’s law, as to all provisions relevant here. The court held that the State’s restrictions on content moderation trigger First Amendment scrutiny under this Court’s cases protecting “editorial discretion.” When a social-media platform “removes or deprioritizes a user or post,” the court explained, it makes a “judgment rooted in the platform’s own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination.” The court concluded that the content-moderation provisions are unlikely to survive “intermediate—let alone strict—scrutiny,” because a State has no legitimate interest in counteracting “private ‘censorship’” by “tilt[ing] public debate in a preferred direction.”…
The Fifth Circuit disagreed … In that court’s view, the platforms’ content-moderation activities are “not speech” at all, and so do not implicate the First Amendment … But even if those activities were expressive, the court continued, the State could regulate them to advance its interest in “protecting a diversity of ideas.” … The court further held that the statute’s individualized-explanation provisions would likely survive, again even assuming that the platforms were engaged in speech. Those requirements, the court maintained, are not unduly burdensome under Zauderer because the platforms needed only to “scale up” a “complaint-and-appeal process” they already used…
We granted certiorari in 2023 to resolve the split between the Fifth and Eleventh Circuits.
II
NetChoice chose to litigate these cases as facial challenges, and that decision comes at a cost. … a plaintiff cannot succeed on a facial challenge unless he “establish[es] that no set of circumstances exists under which the [law] would be valid,” or he shows that the law lacks a “plainly legitimate sweep.” United States v. Salerno (1987). In First Amendment cases, however, this Court has lowered that very high bar. To provide breathing room for free expression, we have substituted a less demanding though still rigorous standard. The question is whether “a substantial number of [the law’s] applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.” …
So far in these cases, no one has paid much attention to that issue. In the lower courts, NetChoice and the States alike treated the laws as having certain heartland applications, and mostly confined their battle to that terrain… in short, they treated these cases more like as-applied claims than like facial ones…
The first step in the proper facial analysis is to assess the state laws’ scope… Starting with Facebook and the other giants: To what extent, if at all, do the laws affect their other services, like direct messaging or events management? And beyond those social-media entities, what do the laws have to say, if anything, about how an email provider like Gmail filters incoming messages, how an online marketplace like Etsy displays customer reviews, how a payment service like Venmo manages friends’ financial exchanges, or how a ride-sharing service like Uber runs? …
The problem for this Court is that it cannot undertake the needed inquiries. “[W]e are a court of review, not of first view.” … The parties have not briefed the critical issues here, and the record is underdeveloped. So we vacate the decisions below and remand these cases…
III
But it is necessary to say more about how the First Amendment relates to the laws’ content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit. Recall that it held that the content choices the major platforms make for their main feeds are “not speech” at all, so States may regulate them free of the First Amendment’s restraints. And even if those activities were expressive, the court held, Texas’s interest in better balancing the marketplace of ideas would satisfy First Amendment scrutiny. If we said nothing about those views, the court presumably would repeat them when it next considers NetChoice’s challenge. It would thus find that significant applications of the Texas law—and so significant inputs into the appropriate facial analysis—raise no First Amendment difficulties. But that conclusion would rest on a serious misunderstanding of First Amendment precedent and principle. The Fifth Circuit was wrong in concluding that Texas’s restrictions on the platforms’ selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas’s interest in changing the content of the platforms’ feeds. Explaining why that is so will prevent the Fifth Circuit from repeating its errors as to Facebook’s and YouTube’s main feeds…
A
… At bottom, Texas’s law requires the platforms to carry and promote user speech that they would rather discard or downplay. The platforms object that the law thus forces them to alter the content of their expression, a particular edited compilation of third-party speech. That controversy sounds a familiar note. We have repeatedly faced the question whether ordering a party to provide a forum for someone else’s views implicates the First Amendment. And we have repeatedly held that it does so if, though only if, the regulated party is engaged in its own expressive activity, which the mandated access would alter or disrupt…
The seminal case is Miami Herald Publishing Co. v. Tornillo (1974). There, a Florida law required a newspaper to give a political candidate a right to reply when it published “criticism and attacks on his record.” The Court held the law to violate the First Amendment because it interfered with the newspaper’s “exercise of editorial control and judgment.” Forcing the paper to print what “it would not otherwise print,” the Court explained, “intru[ded] into the function of editors.” For that function was, first and foremost, to make decisions about the “content of the paper” and “[t]he choice of material to go into” it. In protecting that right of editorial control, the Court recognized a possible downside. It noted the access advocates’ view (similar to the States’ view here) that “modern media empires” had gained ever greater capacity to “shape” and even “manipulate popular opinion.” And the Court expressed some sympathy with that diagnosis. But the cure proposed, it concluded, collided with the First Amendment’s antipathy to state manipulation of the speech market. Florida, the Court explained, could not substitute “governmental regulation” for the “crucial process” of editorial choice…
The capstone of those precedents came in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc. (1995), when the Court considered (of all things) a parade. The question was whether Massachusetts could require the organizers of a St. Patrick's Day parade to admit as a participant a gay and lesbian group seeking to convey a message of "pride." The Court held unanimously that the First Amendment precluded that compulsion. The "selection of contingents to make a parade," it explained, is entitled to First Amendment protection, no less than a newspaper's "presentation of an edited compilation of [other persons'] speech." And that meant the State could not tell the parade organizers whom to include…
On two other occasions, the Court distinguished Tornillo and its progeny for the flip-side reason—because in those cases the compelled access did not affect the complaining party’s own expression. First, in PruneYard Shopping Center v. Robins (1980), the Court rejected a shopping mall’s First Amendment challenge to a California law requiring it to allow members of the public to distribute handbills on its property. The mall owner did not claim that he (or the mall) was engaged in any expressive activity. Indeed, as the PG&E Court later noted, he “did not even allege that he objected to the content of the pamphlets” passed out at the mall. Similarly, in Rumsfeld v. Forum for Academic and Institutional Rights, Inc., (2006) (FAIR), the Court reiterated that First Amendment claims will not succeed when the entity objecting to hosting third-party speech is not itself engaged in expression. The statute at issue required law schools to allow the military to participate in on-campus recruiting. The Court held that the schools had no First Amendment right to exclude the military based on its hiring policies, because the schools “are not speaking when they host interviews.” Or stated again, with reference to the just-described precedents: Because a “law school’s recruiting services lack the expressive quality of a parade, a newsletter, or the editorial page of a newspaper,” the required “accommodation of a military recruiter” did not “interfere with any message of the school.”
That is a slew of individual cases, so consider three general points to wrap up. Not coincidentally, they will figure in the upcoming discussion of the First Amendment problems the statutes at issue here likely present as to Facebook’s News Feed and similar products.
First, the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude…
Second, none of that changes just because a compiler includes most items and excludes just a few. That was the situation in Hurley…
Third, the government cannot get its way just by asserting an interest in improving, or better balancing, the marketplace of ideas. Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access. But in case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm… However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others.
B
… No one thinks Facebook’s News Feed much resembles an insert put in a billing envelope. And similarly, today’s social media pose dangers not seen earlier: No one ever feared the effects of newspaper opinion pages on adolescents’ mental health. But analogies to old media, even if imperfect, can be useful. And better still as guides to decision are settled principles about freedom of expression, including the ones just described. Those principles have served the Nation well over many years, even as one communications method has given way to another…
The key to the scheme is prioritization of content, achieved through the use of algorithms. Of the billions of posts or videos (plus advertisements) that could wind up on a user's customized feed or recommendations list, only the tiniest fraction do. The selection and ranking is most often based on a user's expressed interests and past activities. But it may also be based on more general features of the communication or its creator…
Beyond rankings lie labels. The platforms may attach “warning[s], disclaimers, or general commentary”—for example, informing users that certain content has “not been verified by official sources.” Likewise, they may use “information panels” to give users “context on content relating to topics and news prone to misinformation, as well as context about who submitted the content.” So, for example, YouTube identifies content submitted by state-supported media channels, including those funded by the Russian Government.
But sometimes, the platforms decide, providing more information is not enough; instead, removing a post is the right course. The platforms’ content-moderation policies also say when that is so. Facebook’s Standards, for example, proscribe posts—with exceptions for “newsworth[iness]” and other “public interest value”—in categories and subcategories including: Violence and Criminal Behavior (e.g., violence and incitement, coordinating harm and publicizing crime, fraud and deception); Safety (e.g., suicide and self-injury, sexual exploitation, bullying and harassment); Objectionable Content (e.g., hate speech, violent and graphic content); Integrity and Authenticity (e.g., false news, manipulated media). YouTube’s Guidelines similarly target videos falling within categories like: hate speech, violent or graphic content, child safety, and misinformation (including about elections and vaccines). The platforms thus unabashedly control the content that will appear to users, exercising authority to remove, label or demote messages they disfavor.
Except that Texas’s law limits their power to do so. As noted earlier, the law’s central provision prohibits the large social-media platforms (and maybe other entities) from “censor[ing]” a “user’s expression” based on its “viewpoint.” The law defines “expression” broadly, thus including pretty much anything that might be posted. And it defines “censor” to mean “block, ban, remove, deplatform, demonetize, deboost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” That is a long list of verbs, but it comes down to this: The platforms cannot do any of the things they typically do (on their main feeds) to posts they disapprove—cannot demote, label, or remove them—whenever the action is based on the post’s viewpoint. And what does that “based on viewpoint” requirement entail? Doubtless some of the platforms’ content-moderation practices are based on characteristics of speech other than viewpoint (e.g., on subject matter). But if Texas’s law is enforced, the platforms could not—as they in fact do now—disfavor posts because they:
- support Nazi ideology;
- advocate for terrorism;
- espouse racism, Islamophobia, or anti-Semitism;
- glorify rape or other gender-based violence;
- encourage teenage suicide and self-injury;
- discourage the use of vaccines;
- advise phony treatments for diseases;
- advance false claims of election fraud.
The list could continue for a while. The point of it is not that the speech environment created by Texas’s law is worse than the ones to which the major platforms aspire on their main feeds. The point is just that Texas’s law profoundly alters the platforms’ choices about the views they will, and will not, convey.
And we have time and again held that type of regulation to interfere with protected speech…
That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference… That Facebook and YouTube convey a mass of messages does not license Texas to prohibit them from deleting posts with, say, “hate speech” based on “sexual orientation.” It is as much an editorial choice to convey all speech except in select categories as to convey only speech within them.
Similarly, the major social-media platforms do not lose their First Amendment protection just because no one will wrongly attribute to them the views in an individual post. For starters, users may well attribute to the platforms the messages that the posts convey in toto. Those messages—communicated by the feeds as a whole—derive largely from the platforms’ editorial decisions about which posts to remove, label, or demote. And because that is so, the platforms may indeed “own” the overall speech environment… When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection.
C
And once that much is decided, the interest Texas relies on cannot sustain its law. In the usual First Amendment case, we must decide whether to apply strict or intermediate scrutiny. But here we need not. Even assuming that the less stringent form of First Amendment review applies, Texas’s law does not pass. Under that standard, a law must further a “substantial governmental interest” that is “unrelated to the suppression of free expression.” United States v. O’Brien (1968). Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects. But the interest Texas has asserted cannot carry the day: It is very much related to the suppression of free expression, and it is not valid, let alone substantial.
Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of speech that the major social-media platforms present… The law’s main sponsor explained that the “West Coast oligarchs” who ran social-media companies were “silenc[ing] conservative viewpoints and ideas.” The Governor, in signing the legislation, echoed the point: The companies were fomenting a “dangerous movement” to “silence” conservatives.
But a State may not interfere with private actors’ speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. That is, indeed, a fundamental aim of the First Amendment. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing]” public debate in a preferred direction. It is not by licensing the government to stop private actors from speaking as they wish and preferring some views over others… On the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana…
The Court’s decisions about editorial control, as discussed earlier, make that point repeatedly. Again, the question those cases had in common was whether the government could force a private speaker, including a compiler and curator of third-party speech, to convey views it disapproved. And in most of those cases, the government defended its regulation as yielding greater balance in the marketplace of ideas… But the Court—in Tornillo, in PG&E, and again in Hurley—held that such interest could not support the government’s effort to alter the speaker’s own expression…
The case here is no different… Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose…
JUSTICE ALITO, concurring in the judgment.
The holding in these cases is narrow: NetChoice failed to prove that the Florida and Texas laws they challenged are facially unconstitutional. Everything else in the opinion of the Court is nonbinding dicta.
I agree with the bottom line of the majority’s central holding. But its description of the Florida and Texas laws, as well as the litigation that shaped the question before us, leaves much to be desired…
… the majority … unreflectively assumes the truth of NetChoice’s unsupported assertion that social-media platforms—which use secret algorithms to review and moderate an almost unimaginable quantity of data today—are just as expressive as the newspaper editors who marked up typescripts in blue pencil 50 years ago…
I
As the Court has recognized, social-media platforms have become the “modern public square.” Packingham v. North Carolina. In just a few years, they have transformed the way in which millions of Americans communicate with family and friends, perform daily chores, conduct business, and learn about and comment on current events. The vast majority of Americans use social media, and the average person spends more than two hours a day on various platforms. Young people now turn primarily to social media to get the news, and for many of them, life without social media is unimaginable. Social media may provide many benefits—but not without drawbacks. For example, some research suggests that social media are having a devastating effect on many young people, leading to depression, isolation, bullying, and intense pressure to endorse the trend or cause of the day.
In light of these trends, platforms and governments have implemented measures to minimize the harms unique to the social-media context. Social-media companies have created user guidelines establishing the kinds of content that users may post and the consequences of violating those guidelines, which often include removing nonconforming posts or restricting noncompliant users’ access to a platform.
Such enforcement decisions can sometimes have serious consequences. Restricting access to social media can impair users’ ability to speak to, learn from, and do business with others. Deleting the account of an elected official or candidate for public office may seriously impair that individual’s efforts to reach constituents or voters, as well as the ability of voters to make a fully informed electoral choice. And what platforms call “content moderation” of the news or user comments on public affairs can have a substantial effect on popular views.
Concerned that social-media platforms could abuse their enormous power, Florida and Texas enacted laws that prohibit them from disfavoring particular viewpoints and speakers. Both statutes have a broad reach, and it is impossible to determine whether they are unconstitutional in all their applications without surveying those applications. The majority, however, provides only a cursory outline of the relevant provisions of these laws and the litigation challenging their constitutionality…
A
1
…
2
Days after S.B. 7072’s enactment, NetChoice filed suit in federal court, alleging that the new law violates the First Amendment in all its applications. As a result, NetChoice asked the District Court to enter a preliminary injunction against any enforcement of any of its provisions before the law took effect.
Florida defended the constitutionality of S.B. 7072. It argued that the law’s prohibition of censorship does not violate the freedom of speech because the First Amendment permits the regulation of the conduct of entities that do not express their own views but simply provide the means for others to communicate.
…
With just one exception, the Eleventh Circuit affirmed. It first held that all the regulated platforms’ decisions about “whether, to what extent, and in what manner to disseminate third-party created content to the public” were constitutionally protected expression. Under that framing, the court found that the moderation and individual disclosure provisions likely failed intermediate scrutiny, obviating the need to determine whether strict scrutiny applied. But the court held that the general-disclosure provisions, which require only that platforms publish their censorship policies, met the intermediate scrutiny standard set forth in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio (1985)…
B
1
Around the same time as the enactment of the Florida law, Texas adopted a similar measure, H.B. 20, which covers “social media platform[s]” with more than 50 million monthly users in the United States…
2
As it did in the Florida case, NetChoice sought a preliminary injunction in federal court, claiming that H.B. 20 violates the First Amendment in its entirety. In response, Texas argued that because H.B. 20 regulates NetChoice’s members “in their operation as publicly accessible conduits for the speech of others” rather than “as authors or editors” of their own speech, NetChoice could not prevail. But even if the platforms might have the right to use algorithms to censor their users’ speech, the State argued, the question of “what these algorithms are doing is a critical, and so far, unexplained, aspect of this case.” This deficiency mattered, Texas contended, because the platforms could succeed on their facial challenge only by showing that “all algorithms used by the Platforms are for the purposes of expressing viewpoints of those Platforms.” And because NetChoice had not even explained what its members’ algorithms did, much less whether they did so in an expressive way, Texas argued that NetChoice had not shown that “all applications of H.B. 20 are unconstitutional.”…
II
NetChoice contends that the Florida and Texas statutes facially violate the First Amendment, meaning that they cannot be applied to anyone at any time under any circumstances without violating the Constitution. Such challenges are strongly disfavored. They often raise the risk of “‘premature interpretatio[n] of statutes’ on the basis of factually barebones records.” They clash with the principle that courts should neither “ ‘anticipate a question of constitutional law in advance of the necessity of deciding it’ ” nor “‘formulate a rule of constitutional law broader than is required by the precise facts to which it is to be applied.’” And they “threaten to short circuit the democratic process by preventing laws embodying the will of the people from being implemented in a manner consistent with the Constitution.” Facial challenges also strain the limits of the federal courts’ constitutional authority to decide only actual “Cases” and “Controversies.” Art. III, §2. “[L]itigants typically lack standing to assert the constitutional rights of third parties.” But when a court holds that a law cannot be enforced against anyone under any circumstances, it effectively grants relief with respect to unknown parties in disputes that have not yet materialized.
For these reasons, we have insisted that parties mounting facial attacks satisfy demanding requirements. In United States v. Salerno (1987), we held that a facial challenger must “establish that no set of circumstances exists under which the [law] would be valid.” “While some Members of the Court have criticized the Salerno formulation,” all have agreed “that a facial challenge must fail where the statute has a ‘plainly legitimate sweep.’” In First Amendment cases, we have sometimes phrased the requirement as an obligation to show that a law “‘prohibits a substantial amount of protected speech’” relative to its ‘“plainly legitimate sweep.’”
NetChoice and the Federal Government urge us not to apply any of these demanding tests because, they say, the States disputed only the “threshold question” whether their laws “cover expressive activity at all.” The Court unanimously rejects that argument—and for good reason…
III
…
A
The First Amendment protects “the freedom of speech,” and most of our cases interpreting this right have involved government efforts to forbid, restrict, or compel a party’s own oral or written expression.
Some cases, however, have involved another aspect of the free speech right, namely, the right to “presen[t] . . . an edited compilation of speech generated by other persons” for the purpose of expressing a particular message. See Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., (1995). As used in this context, the term “compilation” means any effort to present the expression of others in some sort of organized package.
An example such as the famous Oxford Book of English Poetry illustrates why a compilation may constitute expression on the part of the compiler. The editors’ selection of the poems included in this volume expresses their view about the poets and poems that most deserve the attention of their anticipated readers. Forcing the editors to exclude or include a poem could alter the expression that the editors wish to convey.
Not all compilations, however, have this expressive characteristic. Suppose that the head of a neighborhood group prepares a directory consisting of contact information submitted by all the residents who want to be listed. This directory would not include any meaningful expression on the part of the compiler.
Because not all compilers express a message of their own, not all compilations are protected by the First Amendment. Instead, the First Amendment protects only those compilations that are “inherently expressive” in their own right…
To show that a hosting requirement would compel speech and thereby trigger First Amendment scrutiny, a claimant must generally show three things.
1
First, a claimant must establish that its practice is to exercise “editorial discretion in the selection and presentation” of the content it hosts…
Determining whether an entity should be viewed as a “curator” or a “dumb pipe” may not always be easy because different aspects of an entity’s operations may take different approaches with respect to hosting third-party speech. The typical newspaper regulates the content and presentation of articles authored by its employees or others, but that same paper might also run nearly all the classified advertisements it receives, regardless of their content and without adding any expression of its own. These differences may be significant for First Amendment purposes…
2
Second, the host must use the compilation of speech to express "some sort of collective point"—even if only at a fairly abstract level. Thus, a parade organizer who claims a First Amendment right to exclude certain groups or individuals would need to show at least that the message conveyed by the groups or individuals who are allowed to march comports with the parade's theme…
3
Finally, a compiler must show that its “own message [is] affected by the speech it [is] forced to accommodate.” In core examples of expressive compilations, such as a book containing selected articles, chapters, stories, or poems, this requirement is easily satisfied. But in other situations, it may be hard to identify any message that would be affected by the inclusion of particular third-party speech…
B
A party that challenges government interference with its curation of content cannot win without making the three-part showing just outlined, but such a showing does not guarantee victory. To prevail, the party must go on and show that the challenged regulation of its curation practices violates the applicable level of First Amendment scrutiny…
C
With these standards in mind, I proceed to the question whether the content-moderation provisions are facially valid. For the following three reasons, NetChoice failed to meet its burden…
1
First, NetChoice did not establish which entities the statutes cover…
… NetChoice has not stated whether the challenged laws reach websites like WhatsApp and Gmail, which carry messages instead of curating them to create an independent speech product. Both laws also appear to cover Reddit and BeReal, and websites like Parler, which claim to engage in little or no content moderation at all. And Florida’s law, which is even broader than Texas’s, plainly applies to e-commerce platforms like Etsy that make clear in their terms of service that they are “not a curated marketplace.” In First Amendment terms, this means that these laws— in at least some of their applications—appear to regulate the kind of “passive receptacle[s]” of third-party speech that receive no First Amendment protection. Given such uncertainty, it is impossible for us to determine whether these laws have a “plainly legitimate sweep.”
2
Second, NetChoice has not established what kinds of content appear on all the regulated platforms, and we cannot determine whether these platforms create an “inherently expressive” compilation of third-party speech until we know what is being compiled…
For one thing, the ways in which users post, send direct messages, or interact with content may differ in meaningful ways from platform to platform. And NetChoice’s failure to account for these differences may be decisive. To see how, consider X and Yelp. Both platforms allow users to post comments and photos, but they differ in other respects. X permits users to post (or “Tweet”) on a broad range of topics because its “purpose is to serve the public conversation,” and as a result, many elected officials use X to communicate with constituents. Yelp, by contrast, allows users to post comments and pictures only for the purpose of advertising local businesses or providing “firsthand accounts” that reflect their “consumer experience” with businesses. It does not permit “rants about political ideologies, a business’s employment practices, extraordinary circumstances, or other matters that don’t address the core of the consumer experience.”
As this example shows, X’s content is more political than Yelp’s, and Yelp’s content is more commercial than X’s. That difference may be significant for First Amendment purposes…
3
Third, NetChoice has not established how websites moderate content. NetChoice alleges that "[c]overed websites" generally use algorithms to organize and censor content appearing in search results, comments, or in feeds. But at this stage and on this record, we have no way of confirming whether all of the regulated platforms use algorithms to organize all of their content, much less whether these algorithms are expressive…
Perhaps recognizing this, NetChoice argues in passing that it cannot tell how its members moderate content because doing so would embolden “malicious actors” and divulge “proprietary and closely held” information. But these harms are far from inevitable. Various platforms already make similar disclosures—both voluntarily and to comply with the European Union’s Digital Services Act—yet the sky has not fallen…
D
Although the only question the Court must decide today is whether NetChoice showed that the Florida and Texas laws are facially unconstitutional, much of the majority opinion addresses a different question: whether the Texas law’s content-moderation provisions are constitutional as applied to two features of two platforms—Facebook’s News Feed and YouTube’s homepage. The opinion justifies this discussion on the ground that the Fifth Circuit cannot apply the facial constitutionality test without resolving that question, but that is not necessarily true…
… The majority’s discussion also rests on wholly conclusory assumptions that lack record support.
For example, the majority paints an attractive, though simplistic, picture of what Facebook’s News Feed and YouTube’s homepage do behind the scenes. Taking NetChoice at its word, the majority says that the platforms’ use of algorithms to enforce their community standards is per se expressive. But the platforms have refused to disclose how these algorithms were created and how they actually work. And the majority fails to give any serious consideration to key arguments pressed by the States. Most notable is the majority’s conspicuous failure to address the States’ contention that platforms like YouTube and Facebook—which constitute the 21st century equivalent of the old “public square”—should be viewed as common carriers. Whether or not the Court ultimately accepts that argument, it deserves serious treatment.
Instead of seriously engaging with this and other arguments, the majority rests on NetChoice’s dubious assertion that there is no constitutionally significant difference between what newspaper editors did more than a half-century ago at the time of Tornillo and what Facebook and YouTube do today…
Now consider how newspapers and social-media platforms edit content. Newspaper editors are real human beings, and when the Court decided Tornillo (the case that the majority finds most instructive), editors assigned articles to particular reporters, and copyeditors went over typescript with a blue pencil. The platforms, by contrast, play no role in selecting the billions of texts and videos that users try to convey to each other. And the vast bulk of the “curation” and “content moderation” carried out by platforms is not done by human beings. Instead, algorithms remove a small fraction of nonconforming posts post hoc and prioritize content based on factors that the platforms have not revealed and may not even know. After all, many of the biggest platforms are beginning to use AI algorithms to help them moderate content. And when AI algorithms make a decision, “even the researchers and programmers creating them don’t really understand why the models they have built make the decisions they make.” Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?
Other questions abound. Maybe we should think about the enormous power exercised by platforms like Facebook and YouTube as a result of “network effects.” And maybe we should think about the unique ways in which social-media platforms influence public thought. To be sure, I do not suggest that we should decide at this time whether the Florida and Texas laws are constitutional as applied to Facebook’s News Feed or YouTube’s homepage. My argument is just the opposite. Such questions should be resolved in the context of an as-applied challenge…
IV
* * *
The only binding holding in these decisions is that NetChoice has yet to prove that the Florida and Texas laws they challenged are facially unconstitutional. Because the majority opinion ventures far beyond the question we must decide, I concur only in the judgment.
Key Takeaways
1. While the challenged laws represent conservative attempts to prevent social media platforms from suppressing conservative or MAGA advocacy, in light of the 2024 election it's not hard to imagine liberals raising similar concerns about right-wing social media owners such as Elon Musk. First Amendment issues aside, what do you think of attempts by state governments to treat private social media platforms as public forums? Do the size and reach of these platforms justify such interventions? Or should social media platforms retain their traditional rights, with consumers instead migrating to platforms that better suit their ideals? Should platforms instead be broken up using anti-monopoly laws?
2. Given your use of these platforms, would you agree that such platforms have stances or overall viewpoints that justify treating them akin to newspapers or magazines? Which ones? Or do you agree with Texas and Florida that, for the most part, these entities are not analogous to print media?
3. How does it matter, if at all, that such content moderation is often performed by algorithms or AI, rather than by human beings?