Free Speech, Not Free Amplification
Introduction
Modern free speech doctrine was built for a world in which expression spread through direct choice: a speaker spoke, an editor published, and an audience decided whether to listen. Today’s digital public sphere no longer operates on that model. On major platforms, speech is not merely expressed – it is algorithmically ranked, boosted, suppressed, and redirected at scale by opaque machine learning systems optimized for engagement. A single post may reach millions not by persuasion, but by platform design.
This shift has strained traditional First Amendment analysis. Legal debates about online speech often collapse two distinct phenomena – user expression and platform amplification – into a single category of “speech,” leaving courts with an all-or-nothing choice between censorship and wholesale inaction. That framing is flawed.
This essay contends that while user-generated expression deserves full constitutional protection, algorithmic amplification should be treated as platform conduct, not speech. Because amplification systems can foreseeably enable incitement, foreign manipulation, and democratic distortion, courts should apply intermediate scrutiny – modeled on the United States v. O’Brien test – to regulations targeting amplification mechanisms, without infringing the expressive rights of users.
How Traditional Free Speech Doctrine Works
The crux of this issue lies in the distinction between speech and conduct. The core First Amendment architecture protects the expression of ideas, but does not immunize all conduct that affects speech. Hence, courts routinely distinguish between speech, expressive conduct, and non-expressive conduct with speech effects.
In Brandenburg v. Ohio (1969), the Court held that speech may be punished only if it is directed to inciting imminent lawless action and is likely to produce such action (Brandenburg v. Ohio 1969). The test requires speaker intent, temporal immediacy, and a direct causal chain. Algorithmic amplification breaks each element: there is no speaker intent, effects are delayed, and causation is probabilistic rather than direct. The result is a doctrinal mismatch – Brandenburg cannot capture algorithm-driven scale.
Turning to content, R.A.V. v. City of St. Paul (1992) held that the government cannot regulate speech based on viewpoint or subject matter (R.A.V. v. City of St. Paul 1992). Content-based regulation of online speech is therefore presumptively unconstitutional; any regulation must target amplification mechanics, not ideas. Ward v. Rock Against Racism (1989) points the way: the Court upheld a city’s regulation of concert sound equipment because the regulation targeted volume and amplification, not message (Ward v. Rock Against Racism 1989). Such a regulation is valid if it is justified without reference to content, narrowly tailored to serve a significant governmental interest, and leaves open ample alternative channels of communication. This is significant because regulating the reach of loudspeakers is directly analogous to regulating algorithmic reach, demonstrating that courts already accept regulation of how speech spreads.
Taken together, these cases show that while the First Amendment rigidly protects what is said, it has long permitted neutral regulation of how speech is disseminated – leaving algorithmic amplification squarely within an existing, yet underapplied, doctrinal category.
Why Algorithmic Amplification Is Not Speech
Two key rulings explain why amplification cannot be considered speech. The first, United States v. O’Brien (1968), held that burning a draft card is conduct with expressive elements, and that the government may regulate such conduct where the regulation is within constitutional power, furthers an important governmental interest unrelated to the suppression of expression, and imposes no greater incidental restriction on speech than necessary (United States v. O’Brien 1968). This establishes intermediate scrutiny for conduct that affects speech and creates doctrinal space for regulation without censorship. The same framework maps onto algorithmic amplification: its mechanics can be regulated without suppressing the underlying speech.
Similarly, Rumsfeld v. FAIR (2006) established that facilitating speech is not legally equivalent to speech: law schools could be required to host military recruiters without any violation of the schools’ free speech rights (Rumsfeld v. FAIR 2006). The key logic was that hosting a speaker is not the same as endorsing one – which entails that facilitating speech is not the same as expressing it. The analogy to algorithmic amplification is direct: algorithms facilitate the distribution of content, and facilitation is conduct, not expression.
Proposed Boundary: Regulating Amplification Under Intermediate Scrutiny
If algorithmic amplification is properly understood as platform conduct rather than protected speech, the appropriate constitutional framework is intermediate scrutiny. Treating amplification as speech would collapse the established distinction between expression and dissemination mechanisms – foreclosing regulation even where substantial democratic interests are implicated. By contrast, intermediate scrutiny has long governed content-neutral regulations that incidentally affect speech, offering a doctrinally coherent approach.
Under the O’Brien framework, regulation of amplification systems is permissible when it advances substantial governmental interests – such as election integrity, national security, and public safety – without targeting expression itself. By regulating amplification mechanics rather than content, such measures limit artificially scaled dissemination while preserving users’ ability to speak and to access information through ample alternative channels.
Counterargument & Rebuttal
A common objection is that algorithmic amplification constitutes editorial judgment entitled to full First Amendment protection. Yet editorial judgment presupposes human discretion, not automated engagement optimization, and the mere facilitation of speech does not itself become speech (Rumsfeld v. FAIR 2006).
Conclusion
The central challenge of free speech in the digital age is not whether individuals may speak, but whether algorithmic systems that amplify speech at unprecedented scale should be constitutionally insulated from regulation. Existing First Amendment doctrine permits neutral regulation of dissemination mechanisms so long as expression itself remains protected. By treating algorithmic amplification as conduct subject to intermediate scrutiny, courts can preserve protections for user speech while addressing the democratic and security risks posed by platform-based, algorithm-driven virality. Drawing this boundary modernizes free speech doctrine without weakening its core commitments to liberty.
References
Brandenburg v. Ohio. 1969. 395 U.S. 444.
R.A.V. v. City of St. Paul. 1992. 505 U.S. 377.
Ward v. Rock Against Racism. 1989. 491 U.S. 781.
United States v. O’Brien. 1968. 391 U.S. 367.
Rumsfeld v. Forum for Academic and Institutional Rights, Inc. 2006. 547 U.S. 47.