A First Amendment Fatality

Introduction

Legal scholarship surrounding §230 of the Communications Decency Act (hereinafter CDA) reaches little consensus regarding either the sustained need for the current statutory language or the constitutional underpinnings of the provision. The statute was a bipartisan amendment to the CDA, proposed by then-Representatives Chris Cox (R) and Ron Wyden (D) in response to a pair of judicial outcomes with substantial impact on the nascent landscape of the internet: Cubby, Inc. v. CompuServe (1991) and Stratton Oakmont, Inc. v. Prodigy Services Co. (1995).1

In Cubby, a case decided in the United States District Court for the Southern District of New York, the plaintiff brought a defamation suit against the host of an online forum, CompuServe, for failure to remove allegedly defamatory content. The court found that CompuServe bore no liability for the content of its online forums because it “had no more editorial control” than would a “public library, book store, or newsstand.”2 Conversely, in Stratton Oakmont, a case decided four years later in the New York Supreme Court, the court found Prodigy liable on similar defamation claims, reasoning that Prodigy’s use of automatic moderation software constituted a “conscious choice to gain the benefits of editorial control.”3

Consequently, the holdings of these two cases present diametrically opposed precedent. Taken together, they sent a troubling message to internet providers hosting third-party, user-generated content: the surest way to shield themselves from liability was to make no effort to moderate content at all.4 This conundrum, sometimes referred to as the “Moderator’s Dilemma,” prompted Cox and Wyden to draft an amendment addressing the issue.5

The two key Section 230 provisions highlighted and elaborated on in this article are as follows: §230(c)(1), which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”; and §230(c)(2), which states that no provider or user of an interactive computer service shall be held liable on account of “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” The two provisions aimed to free intermediaries to moderate content on their sites while removing the threat of liability, with the first provision generally addressing the problem of under-screening and the second addressing the problem of over-screening.6 Although the statute was intended to encourage intermediaries to moderate content, its language and the subsequent broad judicial interpretations have made it an indispensable safeguard of free speech.

In the time since its passage, §230 has regularly been lauded as the “champion” of the modern-day internet. Shielding intermediaries from liability, particularly in the realm of defamation, has allowed the internet to grow into a disruptive titan of communication. In the nearly thirty years since its inception, §230 has almost single-handedly enabled the creation of social media, review sites, and a litany of other services that have become an integral part of daily life. While the statute offers a myriad of substantive and procedural benefits for fostering online speech, it has recently come under bipartisan scrutiny, especially in light of the particularly broad judicial interpretations it has received, exemplified by decisions such as Zeran v. AOL.

Both political players and legal scholars call for the repeal or amendment of §230 on a variety of grounds, while others believe that it remains a fundamental safeguard for free speech. Even among those who find the statute irrefutably beneficial, the question of its constitutional underpinnings persists. One fact, however, remains: §230, as it currently exists, rests on tenuous ground, and the implications of its removal are largely unknown. Its fate, at the moment, lies in political rather than judicial hands: the statute must first be repealed or amended before a case can arise that would require judicial review.

Proponents of Section 230

Staunch advocates provide ample reasons why the statute should remain in place. Perhaps the most convincing argument is that of “collateral censorship,” which occurs “when A censors B out of fear that the government will hold A liable for the effects of B’s speech.”7 These scholars fear that in the absence of §230 (and the resulting exposure to liability for the speech of third parties), this outcome would prove almost inevitable. Looking towards the practical ramifications, they point first to the fact that content moderation undertaken to avoid liability is tremendously difficult, and therefore tremendously costly. The reasons for the high cost are as follows:

First and foremost, laws–particularly regarding defamation–vary from state to state. A lack of federal immunization would require a piecemeal approach, one that would likely default to the standards of the states with the most restrictive speech laws.8 Second, questions of content veracity would impose large investigative costs.9 Third and finally, algorithms cannot reliably interpret context and are therefore unsuccessful at executing this task; Google algorithms, for instance, will flag content such as “I am a gay woman.”10 This is particularly troubling because politically necessary content (footage of police shootings) and lewd content (gratuitous violence), or satire and defamation, are often separated by a thin line requiring human evaluation. Without §230, then, intermediaries would be subject to extraordinary costs deriving from both moderation efforts and exposure to litigation.

Types of Censorship

Other scholars believe that this will lead not only to censorship, but to gross over-censorship. A risk-averse intermediary, especially one whose business does not depend on individuals’ exercise of free speech (such as a network provider), will likely take prophylactic measures to ensure that no content seeps through that could expose it to liability.11 Employees tasked with moderation will “be unlikely to face professional repercussions for playing it safe but would likely face [professional] ramifications for allowing content that later leads to litigation expenses.”12

Moreover, sites will likely resort to removing entire categories of content in the name of expediency and precaution, presumably those categories that diverge furthest from the mainstream (i.e., political dissent), “whose shelter from majoritarian suppression is the special concern of the First Amendment.”13 From a financial perspective, one can easily see why intermediaries would be strongly incentivized to err on the side of removal. For intermediaries of any size and financial standing, the constant possibility of litigation would impose an enormous cost burden—both in preemptive measures taken to prevent legal action and in remedial action to mitigate active threats against hosted content. This dynamic would likely result in the removal of extraordinary amounts of content, much of which would probably fail to constitute defamation or libel if brought before the courts.

Regarding collateral censorship, the procedural benefits of §230 are invaluable. Bearing in mind that the cost of litigation alone would likely serve as a strong deterrent to speech, the reasoning is abundantly clear. When claims that clearly implicate §230 protections are brought before the courts, they are often dismissed at an early stage. The result is widespread mitigation of the costs defendants incur to defeat unmeritorious claims.14 Without this protection, the most economically sound decision for businesses would be to simply remove the content rather than face litigation–which is often the plaintiff’s desired outcome. Furthermore, the predictability of §230 litigation is imperative for sustaining internet speech.

Constitutional doctrines, by contrast, “require a discovery process that usually occurs in summary judgment (or later)...meaning [firms] will likely capitulate in legal proceedings rather than face litigation.”15 If immunity were struck down, decreasing the chances of a successful defense, defendants would likely settle to mitigate losses (with financial exposure far outweighing individual speech concerns), while opportunistic plaintiffs would presumably bring many more suits.16 Finally, as mentioned earlier, the uniformity of §230 proceedings means that firms are not inundated by a patchwork of disparate state laws.

Section 230 and Scienter

Another pertinent element of §230’s utility is its elimination of any scienter requirement, which would otherwise bring into judicial consideration a defendant’s awareness of the disputed content. In Zeran v. AOL, a defamation case which greatly expanded the interpretational scope of the statute, the court deemed the scienter element immaterial, thereby “extinguish[ing] notice-based liability for online intermediaries.”17 The benefits of this decision are enormous. While the consideration of scienter may seem, at first glance, to be a positive, especially regarding legitimately objectionable (or unprotected) content, the practical reality of the digital age suggests the contrary. In cases involving traditional editors, scienter considerations have indeed proven a valuable measure for balancing state and individual interests. But in the context of an internet landscape that handles exponentially greater content volume, the application of a scienter requirement generally results in a “notice-and-takedown regime,” in which entirely permissible content is targeted by those with personal interests in removing it.18 It is not hard to imagine an alternative where every unflattering Yelp review, exposé, or photo became subject to litigation, invariably tipping the scales towards those with the means to pursue legal action. Keeping in mind intermediaries’ deference to their bottom lines (as opposed to principled free speech concerns) and the unique dynamics mentioned thus far, one does not have to think long or hard to envision a legal environment giving way to “haphazard takedowns” and collateral censorship.

Innovation and Antitrust Concerns

A final consideration in favor of §230 concerns innovation and antitrust. The existence of §230 has allowed countless online enterprises to thrive that, under persistent threat of legal action (meritorious or not), likely would not have. Review sites and social media are extremely important, but one must also consider the litany of blogs and small-scale online publications which have proven central to democratic discourse. While larger, more established companies may have revenue streams capable of withstanding litigative onslaughts, “for smaller internet services, defending a single protracted lawsuit may be financially ruinous.”19 This presents serious threats to the future of innovation, not to mention troubling implications for an industry that is already dangerously consolidated.

In sum, these authors cast the potential demise of §230 as a mortal blow to free speech. The internet, they say, has changed the way people communicate, but the version people have come to know and rely upon is not guaranteed. In fact, it is a direct result of §230 (intentional or otherwise) and could look dramatically different without it. To quote Claudia Haupt: “Section 230 is potentially in jeopardy of becoming a victim of its own success, because its benefits are less salient than the particular instances of defamation.”20

Proponents of Repealing Section 230

This growing sentiment for repeal or amendment is increasingly audible, echoing from scholars, politicians, and citizens alike. Politicians and voters from both ends of the political spectrum call for a common end—the demise of §230—yet their rationales are diametrically opposed. For the right, concerns largely center on alleged censorship of conservative voices. Intermediaries, they say, particularly social media titans, indiscriminately stifle conservative content and infringe on users’ First Amendment rights. Most famously, Donald Trump was banned from numerous platforms, resulting in a maelstrom of largely unsuccessful legal action. It is worth noting that there is almost no objective evidence of such systematic censorship; in fact, conservative content regularly garners the most engagement on major platforms. Nonetheless, the allegations remain. For these conservative politicians and their constituents, the culprit is §230(c)(2), which grants intermediaries the right to moderate content without fear of legal retribution.

From the Left and the Right

From the left, complaints are regularly lodged citing the prevalence of hate speech and disinformation on the web, and the moral hazard presented by the profit motive in preserving such speech, which tends to generate higher rates of engagement. The object of their disdain is §230(c)(1), which immunizes intermediaries from liability for third-party content hosted on their platforms, whether or not that content is constitutionally protected. To be sure, much of the language the left seeks to temper is, although potentially distasteful, constitutionally permissible, and efforts to suppress it raise exactly the collateral-censorship concerns voiced by pro-§230 advocates. Right or left, the discovery of common ground presents a daunting possibility—the mounting political pressure to dismantle the statute may reach critical mass, toppling the flimsy piece of legislation which currently scaffolds the modern internet.

Academic critics such as Danielle Citron provide an in-depth view of why §230 should be amended. While granting the enormous benefits of the statute, they claim that the prevailing overbroad interpretation simply extends too far, often with dreadful consequences. Although the Supreme Court has never officially weighed in on §230, lower courts have reached near-universal agreement that the statute should be construed broadly, often citing free speech concerns in their reasoning. The result has been wide-scale immunization for nearly every internet provider, offering protection from liability to enterprises having little to do with speech and enabling them to engage in conduct that, according to Citron, would be impermissible in a physical forum.21 The statute, she says, has largely been interpreted with scant consideration of Congressional intent. Recalling the initial purpose of the CDA, she reminds us that the legislative aim was to create a safer internet with minimal access to “lewd and lascivious content.” §230, as presumably intended, was supposed to extend to intermediaries the ability to remove and moderate content without fear of exposure to liability for exercising editorial control. Citron offers numerous examples of “bad samaritans” that have directly benefited from §230.

The Importance of Backpage.com

Perhaps most notable is Backpage.com, a web service dedicated to facilitating illicit sexual transactions. Despite taking obvious measures to veil the practice from detection, such as anonymizing users’ communications and stripping identifiable metadata from pictures hosted on the site, Backpage still “enjoyed broad immunity from liability arising from user-generated content.”22 While the facilitation of these services would entail legal culpability in nearly any other setting, the judicial interpretation of §230, arising from a plain-text reading of the statute, has shielded such unsavory players.

Many other enterprises of a similar nature have yielded consistent results. Citron, Wittes, and other advocates for change insist that while “the internet [may have] needed a broad liability shield when it was young, it certainly needs it no longer.” They propose amending the current provision to stipulate that intermediaries be immunized from liability only “provided they take reasonable steps to prevent or address unlawful uses,” thereby narrowing the statute’s admittedly ambiguous scope.23 Such an amendment would necessarily reinvigorate a scienter element and, although potentially deterring some of the worst players, expose countless others to potential liability and prohibitively exorbitant moderation costs. Furthermore, this reliance on anecdotal bad actors seems to disregard the larger gift of §230. As with any liberty, including fundamental rights such as freedom of speech, malevolent exercise is far easier to spot but is microscopic relative to the benefits so often taken for granted.

Classification of First Amendment Entities

Regardless of §230’s societal value, the paradigmatic shift to a world of ubiquitous digital communication leaves a lingering question in the face of its potential demise: which, if any, of the substantive privileges offered by the statute can be derived from the First Amendment itself? Even among strong advocates, the extent to which the statute is encapsulated in the amendment stokes contentious debate. To better understand this question, it is prudent to analyze the most pertinent doctrinal analogies. Classifications of First Amendment entities have evolved based on the nature of the entity as well as the contextual circumstances, “with each analogous understanding necessitating different rights and obligations.”24

State Actors

The first doctrinal categorization is that of “state actors” (usually government entities), who, under long-standing jurisprudence, are the only entities subject to affirmative obligations under the First Amendment. In other words, while the First Amendment generally stipulates what cannot be done to a given entity regarding speech, in the case of state actors it dictates what they cannot do–specifically, they cannot compel speech or indiscriminately stifle free exercise of the right. There have been cases in which private enterprises have been categorized as state actors. In Marsh v. Alabama, the Court considered a case wherein a Jehovah’s Witness was arrested for criminal trespass in a company town (an entire town owned by a single private company). The Court found that “except for ownership by a corporation, [the town] had all the characteristics of any other American town,” and as such was functionally equivalent to a state actor, obligated to guarantee First Amendment protections.

Public Functions

Subsequent cases refined the concept of “public functions,” including Amalgamated Food Employees Union v. Logan Valley Plaza, which held a shopping mall to be the equivalent of the company town in Marsh, and Hudgens v. NLRB, which stated that “in order to qualify as performing a public function, a business must actually be doing a job normally done by the government.” Although Klonick suggests that multinational intermediaries are, in many ways, congruent to systems of governance, the classification of intermediaries as state actors appears to be a losing battle. The possibility was somewhat revitalized, however, in the landmark Packingham v. North Carolina case, wherein the Court struck down a statute barring registered sex offenders from platforms like Facebook, reasoning that excluding these people “from access to social media altogether is to prevent the user from engaging in the legitimate exercise of First Amendment rights.”25 While Packingham may have signaled the Supreme Court’s acknowledgement that social media is vital to modern speech, the case centered on state action and offers little insight with respect to the rights and obligations conferred upon the intermediaries themselves.

Broadcasters

Another potential avenue of doctrinal analogy is that of “broadcasters.” Online intermediaries, as general disseminators of information, present First Amendment implications similar to those of broadcasting services. In the seminal case on this issue, Turner Broadcasting v. FCC, the Court found that “must-carry provisions,” which required cable operators to reserve channel space for local broadcast networks, were within constitutional bounds given the relative scarcity of broadcasting frequencies and the high potential for censorial abuse.26

Yet in Reno v. ACLU, the Court differentiated online intermediaries from broadcasters, noting that internet providers “lack scarcity [of frequencies], invasiveness, and a history of government regulation.” While this may have been true when Reno was decided in 1997, the internet has since evolved to better meet these conditions (the first two in particular).27 Network effects and monopolistic practices have drastically consolidated the landscape for online communication, and one can hardly call the internet’s role in modern life anything short of invasive. However, if the broadcaster analogy were extended to capture internet service providers (ISPs) and online intermediaries, those entities would necessarily face affirmative obligations and could lose a fair degree of editorial control–a troubling outcome, indeed.

Publication Editors

A final conception presenting some circumstantial parallels is the doctrinal analogy to “publication editors.” Under First Amendment doctrine, editors and their press publications are thought to play such an important role in the facilitation of speech that they are generally protected from state interference.28 This avenue of jurisprudence was crystallized in Miami Herald v. Tornillo, which deemed a Florida statute guaranteeing political candidates a “right to reply” unconstitutional under the Free Press Clause. “As internet intermediaries have become more vital to speech, First Amendment advocates have urged courts to apply the holding of Tornillo to platforms, granting them their own speech rights.”29 While this might grant intermediaries a certain amount of latitude with regard to content, it would likely still expose them to liability for unprotected speech posted by third-party users.

New York Times v. Sullivan’s Role

This problem, however, was somewhat mitigated by the ruling in New York Times v. Sullivan, wherein the Court overturned a libel verdict against the New York Times for publishing an advertisement containing factual inaccuracies, noting that such a burden would likely “deter [editors] from permitting certain content...even though it is believed to be true and even though it is in fact true, because of doubt whether it can be proved in court or fear of the expense of having to do so.”30 Legal journalist Anthony Lewis writes about the decision in great depth in his book Make No Law, emphasizing how the Sullivan case changed the landscape of libel in America.31

Although this ruling would theoretically shield intermediaries from the missteps of third-party content, the Sullivan Court added a scienter element to its analysis, stating that plaintiffs must “show that [the statement] was made with knowledge or reckless disregard for its falsity” (Oyez). Unfortunately, under such a standard, intermediaries would likely be exposed to the brute censorship imposed by “notice-based liability,” a possibility that was discarded with the Zeran ruling. Because editors of print publications handle exponentially less content volume and have a much greater interest (economic and principled) in the rights of individual speakers, they are far more likely to defend and investigate alleged infractions than their digital counterparts, who would likely acquiesce to a “notice-and-takedown regime.”32

Proxy Censorship

Though current judicial doctrine may seem ill-fitting with respect to the privileges of §230 (which might suggest the need for entirely novel doctrine), scholars like Seth Kreimer say that this is not necessarily so. The relevant precedent, he argues, may be found not by seeking parallels in the medium, but in parallels to the impediments to speech. Much like the idea of “collateral censorship,” Kreimer posits a phenomenon he calls “censorship by proxy.” The basic idea is this: “Rather than attacking speakers or listeners directly, [entities often seek] to enlist private actors as proxy censors to control the flow of information.”33

As illustrated in the discussion of collateral censorship, exposing intermediaries to liability might have a major, albeit inadvertent, effect on speech. This dynamic, Kreimer explains, is nothing new. It has appeared throughout this country’s history as the product of deliberate action, and it has produced judicial rulings that may provide some guidance. The key element of proxy censorship is finding the weak link in the communicative chain, usually the entity that has the least to lose by acquiescing to the desired change or is most prone to influence. In the case of the internet, this might be network providers or particularly vulnerable platforms. Today, both domestically and internationally, this tactic is used by interested government parties to curb speech, and not just in authoritarian regimes.

Understanding the potential for proxy censorship in the digital era requires two important considerations. First, the networks of the internet are composed of a series of electronic links, meaning that target proxies can be found anywhere along that chain in an attempt to eradicate certain information. “Upstream” providers–particularly those who offer services to individual websites–have far less to gain from hosting specific content and are thus far more likely to acquiesce to censorship demands. The marginal benefit of hosting a website, particularly a smaller site that may be imperative to political discourse, is generally far outweighed by the consequences of being targeted by monied interests. Second, the endless amount of information on the internet means that the absence of any particular piece of content is almost impossible to notice. More to the point, intermediaries have no obligation to notify end-users of censored content.

Historical Roots of Proxy Censorship

Proxy censorship, and the Court doctrine which followed, finds its historical roots in McCarthyism and the domestic war against communist influence. After World War I, constitutional doctrine began to evolve to better protect speakers and listeners from interference and criminal punishment by the government. As a result, the federal government resorted to more indirect methods of censoring speech and publications it regarded as ‘dangerous.’34 Congressional committees (such as the House Un-American Activities Committee) reasoned that although “Congress does not have the power to deny citizens the right to believe in, teach, or advocate communism, fascism, or nazism, it does have the right to focus the spotlight of publicity upon their activities.”35

Since direct censorship was rendered constitutionally impermissible, government attention turned to “mobilizing private sanctions that were either constitutionally protected or practically immune to prosecution.”36 Perhaps the most famous example is the McCarthy blacklists containing the names of alleged communist sympathizers. These efforts often resulted in non-official sanctions and ostracism in individuals’ personal and professional lives, which not only quelled the speech of the charged individual but also sent an ominous message to the general public, spawning an era of self-censorship. Similar methods were employed in the South during the rise of the Civil Rights Movement.37

During the 1950s and ’60s, the Court responded to this phenomenon through a series of cases involving subtle censorship and indirect sanctions. In NAACP v. Alabama, the Court noted that “abridgement of such rights, even though unintended, may inevitably follow from varied forms of government intervention,” and it stated further in Bates v. City of Little Rock that rights under the First Amendment are “protected not only against heavy-handed frontal attack, but also from being stifled by more subtle government interference.”38 Taken together, these cases indicated that even indirect methods of censorship could constitute constitutional violations given their “chilling effect” on speech.39

Unfortunately, this line of jurisprudence also offers little toward conceptualizing or predicting the rights of intermediaries under the First Amendment. Much like Packingham, these cases provide only a limited framework for navigating a post-§230 era, given that they primarily involve actions of the State. While state censorship is certainly a concern if §230 were repealed, civil litigation (primarily libel and defamation) poses a far more daunting threat to the current internet landscape.

Conclusion and an Agenda for Future Research

Our discussion thus far leads to a somewhat troubling conclusion: there appears to be no line of doctrine which resolves this dilemma. In every case where editorial control is acknowledged, liability is also extended to the content host. No constitutional conception can provide the protections offered by §230, namely the ability to regulate content while remaining shielded from an onslaught of civil liability that would drastically transform the modern internet. In some sense, §230 can be blamed for this. Given the tremendous procedural benefits of the statute, the United States Supreme Court has never had to formulate a doctrine that might protect speech on the internet, but in the current political climate, those days may be numbered.

While it is outside the scope of this paper to craft a legal formulation that might guide or satisfy the Court, the goal of this article is to emphasize the looming threat, now bubbling through the electorate and the halls of Congress, to all forms of online discourse. Though the problem can conveniently be framed as a tremendous lack of accountability for tech giants and unsavory players, the practical implications of amending or repealing §230 usher in a future that would dampen or eradicate some of the most important forms of communication available to us today. Equally alarming, some of the services which people have come to know and rely upon–the review sites, blogs, and the like–could vanish. Most dire of all, the potential consequences for essential social and political discourse could be fatal to free speech. Consider, for instance, the #MeToo movement, wherein countless women spoke out against high-powered (and often wealthy) sexual aggressors. Were the content hosts liable to defamation suits, it is easy to envision a scenario where such content was quickly removed under threat of litigation. In a similar vein, countless smaller websites or purveyors of political dissent could be easily silenced in the era of a “notice-and-takedown” regime. These less powerful speakers are, after all, whom the First Amendment was intended to protect.

However desirable a world in which harmful content could be reliably removed from public discourse, the current landscape–legal, technological, and political–makes the task nearly impossible to accomplish without massively handicapping one of the most beneficial platforms for communication humans have ever created. As with any liberty, its shortcomings and improper uses often garner far more public attention than the privileges it offers. But unlike constitutional liberties, which can only be amended through a rigorous political process, the flimsy legislative statute which scaffolds the entire digital landscape can be easily amended or repealed–and likely will be. If, or when, that day comes, the Court will have no choice but to weigh in on the unique status of these intermediaries, who, unlike the novel mediums which preceded them (television, newspapers, etc.), present a unique conundrum: one which demands both the power of editorial control and exemption from liability for the content they host.

Perhaps the Court will follow its own historical pattern of bringing legislative measures under the protection of the Constitution through judicial review. Just as the “fair use” doctrine found a home in the First Amendment, the Civil Rights and Voting Rights laws found a home in the Fourteenth Amendment, and a litany of civil and administrative laws gave way to modern due process, so too might the specific function of §230 find itself crystallized under the First. Unfortunately, to get there, §230 must first be revoked and the Supreme Court must agree to hear such a case. In the meantime, we wait.


References

1 Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review, April 10, 2018. https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/.

2 Haupt, Claudia E., and Jack M. Balkin. “Section 230 as First Amendment Rule.” Harvard Law Review, May 10, 2018. https://harvardlawreview.org/2018/05/section-230-as-first-amendment-rule/.

3 Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review, April 10, 2018.

4 Ibid.

5 Haupt, Claudia E., and Jack M. Balkin. “Section 230 as First Amendment Rule.” Harvard Law Review, May 10, 2018.

6 Citron, Danielle K., and Benjamin Wittes. “The Problem Isn't Just Backpage: Revising Section 230 Immunity.” Georgetown Law Technology Review, July 2018. https://scholarship.law.bu.edu/faculty_scholarship/631.

7 Balkin, Jack M. “The Future of Free Expression in a Digital Age.” Pepperdine Law Review 36, no. 2 (2009). https://digitalcommons.pepperdine.edu/plr/vol36/iss2/9.

8 Goldman, Eric. “Why Section 230 Is Better than the First Amendment.” SSRN Electronic Journal, November 2019. https://doi.org/10.2139/ssrn.3351323.

9 Haupt, Claudia E., and Jack M. Balkin. “Section 230 as First Amendment Rule.” Harvard Law Review, May 10, 2018.

10 Ibid.

11 Kreimer, Seth F. “Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link.” University of Pennsylvania Law Review 155, no. 1 (2006): 11. https://doi.org/10.2307/40041302.

12 Haupt, Claudia E., and Jack M. Balkin. “Section 230 as First Amendment Rule.” Harvard Law Review, May 10, 2018.

13 Kreimer, Seth F. “Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link.” University of Pennsylvania Law Review 155, no. 1 (2006): 11. https://doi.org/10.2307/40041302.

14 Goldman, Eric. “Why Section 230 Is Better than the First Amendment.” SSRN Electronic Journal, November 2019. https://doi.org/10.2139/ssrn.3351323.

15 Ibid.

16 Ibid.

17 Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review, April 10, 2018.

18 Goldman, Eric. “An Overview of the United States’ Section 230 Internet Immunity.” Oxford Handbook of Online Intermediary Liability, 2020, 153–71. https://doi.org/10.1093/oxfordhb/9780198837138.013.8.

19 Goldman, Eric. “Why Section 230 Is Better than the First Amendment.” SSRN Electronic Journal, November 2019. https://doi.org/10.2139/ssrn.3351323.

20 Haupt, Claudia E., and Jack M. Balkin. “Section 230 as First Amendment Rule.” Harvard Law Review, May 10, 2018.

21 Citron, Danielle K., and Mary Anne Franks. “The Internet as a Speech Machine and Other Myths Confounding Section 230 Reform.” University of Chicago Legal Forum, 2020. https://chicagounbound.uchicago.edu/uclf/vol2020/iss1/3.

22 Citron, Danielle K., and Benjamin Wittes. “The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity.” Fordham Law Review 86 (2017): 401. https://ir.lawnet.fordham.edu/flr/vol86/iss2/3.

23 Ibid.

24 Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review, April 10, 2018. https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/.

25 Ibid.

26 Kreimer, Seth F. “Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link.” University of Pennsylvania Law Review 155, no. 1 (2006): 11. https://doi.org/10.2307/40041302.

27 Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review, April 10, 2018. https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/.

28 Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review, April 10, 2018. https://harvardlawreview.org/2018/04/the-new-governors-the-people-rules-and-processes-governing-online-speech/.

29 Ibid.

30 Ibid.

31 Lewis, Anthony. Make No Law. Random House, 1991.

32 Goldman, Eric. “Why Section 230 Is Better than the First Amendment.” SSRN Electronic Journal, November 2019. https://doi.org/10.2139/ssrn.3351323.

33 Kreimer, Seth F. “Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link.” University of Pennsylvania Law Review 155, no. 1 (2006): 11. https://doi.org/10.2307/40041302.

34 Ibid.

35 Ibid.

36 Ibid.

37 Ibid.

38 Ibid.

39 Blasi, Vincent. “The Pathological Perspective and the First Amendment.” Columbia Law Review 85 (1985): 449, 482.

Megan Cistulli and Ty Stillerman

Megan Cistulli, a senior at the University of California, Berkeley, is earning a Bachelor of Arts in Political Science with an emphasis in International Relations and American Politics while minoring in Human Rights.

Ty Stillerman is a senior at the University of California, Berkeley, pursuing a degree in Political Economy.
