
Op-Ed | The Sling | April 6, 2026

The First Amendment Does Not Grant Carte Blanche to Social Media: A Reply to David French

Economic Media | Media Accountability | Tech

This article initially ran in The Sling. Read the original here.


Big Tech is reeling from a pair of recent trial court decisions holding Facebook parent company Meta liable for creating an addictive, abusive service that substantially contributed to adverse mental health outcomes for the plaintiffs. In one case, the New Mexico attorney general prevailed in a $375 million lawsuit over Meta’s facilitation of child sexual exploitation. In the other, a California jury found that Meta helped cause a young woman’s thoughts of self-harm and body-image issues.

After years of social media companies running amok—abetting genocide, skewing elections, and breeding radical political polarization—these cases offer many a glimmer of hope that the law might still bind our age’s mega-corporations. And, right on cue, The New York Times opinion section’s resident scold David French arrived on the scene to fret about how the decisions might lead to “open season on the platforms.”

French’s argument, at least when it is not merely aping the somewhat more thoughtful Mike Masnick at Techdirt, is woefully incomplete. French does not even consider the evidence and arguments in the actual trial—preferring to speak in broad flourishes—and advances an articulation of free speech so expansive that it could preclude any regulation of online platforms at all. First Amendment considerations are important and justify serious debate, but they cannot simply exempt Big Tech from every other legal standard, including the countervailing limits of protected speech and product liability. That speech is involved does not short-circuit all other legal analysis; invoking “expression” does not magically stop every other law on the books from applying.

From digital town square to editor

Let’s begin with the legal questions French advances. His entire argument is that (a) the speech being disseminated through social media is legal, and so (b) any product liability theory is an infringement on the platforms’ First Amendment rights.

That, though, is quite debatable. The speech being disseminated is actually only mostly legal; it also frequently includes fraud, incitement to violence, and defamation. Last year, a Reuters investigation uncovered that Meta internally sanctions a degree of fraud in service of hitting its own revenue goals. The platforms are generally not held responsible, however, for disseminating speech that falls outside the First Amendment’s protections, because of a landmark 1996 law that defines online platforms as distinct from publishers.

French somehow manages to make it through his entire column without so much as a reference to Section 230 of the Communications Decency Act of 1996, the single most important legal authority for defenders of social media arguing on free speech grounds. Section 230 establishes that forums and other online hosting platforms are not a type of publisher. Before 1996, any attempt at moderation exposed online platforms to claims that they were acting as publishers, making them responsible for the content they disseminated. So the CDA included Section 230 to avoid punishing companies for attempting to police the content on their platforms and to encourage a degree of oversight.

The inherent contradiction arises when companies do actively choose to act as publishers. When the charred remnant of Elon Musk’s Twitter opts to boost right-wing talking points and suppress left-wing accounts, it is actively promoting some content over other content based on the company’s own preferences, rather than neutrally distributing all views that meet the firm’s guidelines. So the real question is whether social media companies are functioning as the “modern-day town square” or instead are playing an editorial role. And the obvious answer is that they are largely doing both.

There are reasonable arguments that the internet as we know it depends on Section 230 to exist in its current shape, and that product liability theories of harm like the ones articulated in Meta’s recent losses could be an end-run around its protections. Yet French makes no real attempt to engage those points. Rather, he opts for the most tedious, generic type of “free speech” deflection, merely invoking the First Amendment without thinking through how it actually works.

The argument from Masnick that French cites does include a hint of this analytical work. There are valid points about the burden of proof, whether the decisions functionally neuter Section 230, and whether a standard of product liability can be counterbalanced against free speech rights. But French uses the First Amendment as a fig leaf to avoid arguing the decision on the merits. At multiple points he outlines exceptions to freedom of expression but gives no thought to whether Meta’s actions might fall into any of those buckets. If Meta’s algorithm is designed specifically to boost speech that constitutes true threats or defamation, then Meta could be held liable under those circumstances. And in the New Mexico case, Meta was explicitly being sued for distributing child sexual abuse material, one of the very exceptions French cites.

When speech can be regulated

At the core of this issue are two different legal standards surrounding speech on social media, sometimes complementary and sometimes contradictory: (1) a standard of editorial discretion, spelled out in Moody v. NetChoice, where an industry trade association argued that the First Amendment barred Texas and Florida laws regulating the moderation practices of the platforms it represented, and (2) the protections for content hosting articulated in Section 230. The former, a 9-0 decision, held that curation must be understood as an editorial exercise, while the latter establishes that hosting content does not make a platform the publisher of that content. Section 230 means that if I post something defamatory on Bluesky, for example, the defamed party cannot sue Bluesky for hosting it.

This creates an obvious grey area around where types of promotion and design decisions cross over from moderation to editorial functions. And there are important and interesting discussions to be had there! 

Yet French has nothing much to say about that, opting instead to gloss over the balancing act that the law is doing. 

There are also important legal questions about how far the mechanisms for disseminating speech are shielded by virtue of their association with protected speech. For instance, a paperboy who breaks your window by throwing a newspaper through it cannot simply invoke the fact that he was delivering legally protected speech to avoid liability. Likewise, Meta cannot create a system that harms users and expect exemption from basic consumer protection and product liability standards.

The catch is that Meta’s delivery system is colorably harmful because of the effect of the speech being delivered; it may be that the firehose of information distribution itself, rather than any single piece of content, is more than most people can handle.

Here’s the crux of French’s argument, again mimicking Masnick: 

A social media site isn’t a bottle of alcohol or a cigarette. It’s not delivering a drug. It’s delivering speech. Sometimes that speech is silly and harmless. Sometimes it is toxic and harmful. Sometimes it’s educational or inspiring. But it’s all speech, and in America speech traditionally can only be blocked, censored or regulated in the narrowest of circumstances. 

That is, strictly speaking, not true. We regulate speech all the time: RICO and conspiracy provisions, contractual restrictions on expression, fraud, defamation, intellectual property protections, and many, many more limitations exist. The First Amendment is not a get-out-of-jail-free card.

For reference, here is the corresponding quote from Masnick: 

Lots of people (including related to both these cases) keep comparing social media to things like cigarettes or lead paint. But, as we’ve discussed, that’s a horrible comparison. Cigarettes cause cancer regardless of what else is happening in a smoker’s life. Lead paint causes neurological damage regardless of a child’s home environment. Social media is not like that. The relationship between social media use and mental health outcomes is complex, highly individual, and mediated by dozens of confounding factors that researchers are still trying to untangle. And, also, neither cigarettes nor lead paint are speech. The issues involving social media are all about speech. And yes, speech can be powerful. It can both delight and offend. It can make people feel wonderful or horrible. But we protect speech, in part, because it’s so powerful. 

Many smokers don’t get cancer, actually. And I, despite not smoking, got a rare type of cancer in my lung in my mid-20s. All of these harms are about increasing the probability of bad outcomes, not single-handedly causing them. Cancer risk is also mediated by dozens of “confounding factors that researchers are still trying to untangle,” including genetics, immune system health, age, and exposure to all manner of things.

Notifications, the algorithm, and other design choices are different from content

Social media is certainly a different beast than alcohol or cigarettes or lead paint, but the analogy misses how the use of social media is distinct from the content of social media. Neither Masnick nor French engages with the content-agnostic specifics of the case, of which there are several. Both insist that the design features being litigated are necessarily a type of second-order expression because they depend on the entertainment value of the underlying content. But that argument has a number of problems.

For one, it simply ignores practices designed to foster a behavioral addiction that are not direct mechanisms of content distribution. The plaintiff’s case, for instance, focused significantly on the role of notifications: the plaintiff in KGM v. Meta reportedly received notifications throughout the day from Instagram and Facebook, giving her a “rush” and inducing her to seek bathroom breaks to check them.

Similarly, a targeted algorithm is a design choice that has addictive properties irrespective of the underlying content. Although better content makes the platform more addictive, the design features around recommendations and endless scrolling have addictive properties all on their own. Saying that the algorithm can’t be considered addictive because it relies on a functional layer of content is like saying that alcohol can’t be considered addictive on its own because it relies on a glass to be consumed.

Moreover, the algorithm itself has some measure of protection as editorial speech under Moody v. NetChoice. But if that protection owes to the editorial standard, it opens the door to holding the companies directly accountable for the harms of their own speech, including addiction.

This is actually a point French directly agrees with: 

For example, just two years ago, I wrote in defense of a federal appellate court decision holding that TikTok was potentially liable for algorithmically suggesting the so-called blackout challenge to a 10-year-old girl who later tried the challenge (which involves voluntarily choking yourself) and died. In that case, TikTok’s algorithm proactively suggested the challenge to the young girl. She did not search for it. As I argued at the time, TikTok should be treated in the same way that we’d treat an adult who urged a child to try a potentially fatal activity. But that’s not what the California case was about. In that case, the fundamental argument was that the design caused an addiction, not that specific speech caused direct harm.

Exactly! TikTok, Meta, and other social media companies should be held responsible for the legal consequences of design functions in the same way that other parties are held responsible for editorial decisions. And that is precisely what these cases are doing: working out how far free speech protection for design choices extends and then, where a design is a matter of editorial expression rather than broad content dissemination, holding the platforms accountable under general product liability law.

The First Amendment simply cannot be understood as granting carte blanche to any publishing or distribution function as long as it delivers speech. While film studios have a right not to have their art banned, they don’t have a right to lace the popcorn with cocaine to keep you coming to the theater.

Image credit: “Mark Zuckerberg” by Alessio Jacona is licensed under CC BY-SA 2.0.


More articles by Dylan Gyauch-Lewis
