Big Speech and Free Expression: How Can Social Media Be Regulated?
Kate Klonick is an Associate Professor of Law at St. John's University Law School, an Affiliate Fellow at the Information Society Project at Yale Law School, and a Nonresident Fellow at the Brookings Institution. Her research focuses on the private governance of online speech; most recently, her work on the development of Facebook's Oversight Board was published in the Yale Law Journal and The New Yorker. She received her undergraduate degree from Brown University, her JD from Georgetown, where she remains a Senior Editor for the Georgetown Law Journal Online, and her Ph.D. in Law from Yale Law School.
The interview was conducted in Fall 2022 and has been edited for brevity and clarity.
Harvard Undergraduate Law Review: In one of your recent academic publications on Big Speech, you mention that “technology companies seem omnipotent, omnipresent, and without accountability for their harm to society.” What is Big Speech’s role in this, and what kinds of regulations are currently implemented to limit or enable it?
Kate Klonick: Big Speech was a term I coined to refer very specifically to user-generated content platforms and the ways in which we use them to socialize, protest, and communicate with each other. This includes technology companies with speech platforms, such as YouTube, Twitch, Twitter, and Facebook, that control how we intermediate and publish our information online. These companies have a huge market for advertising, but what we're most upset about is not their monopoly power over advertising; it's their monopoly over the speech platforms, since those are the platforms people are talking about when they discuss the harms of social media.
HULR: How is the information that companies such as Amazon, Google, Meta, and Snapchat track utilized in governance of user-created content?
KK: One of the things we've seen is a large focus on the algorithm, which has proliferated specifically on large speech platforms. Algorithms drive Twitter, Facebook's News Feed, YouTube's recommendations, and obviously TikTok; these platforms hold a huge amount of information, and the companies need a way to sort through it. These systems track what content you view and how long you spend on it in order to automatically match you with content you're likely to spend time on. There's a very fine line between what will keep you engaged and what will leave you outraged or upset; they want to keep you engaged, but not so upset that you quit the site. Companies also don't want content so outrageous that advertisers refuse to run their ads alongside it. We can see this with Kanye being banned by Musk on Twitter; no one wants their ads next to swastikas unless you're trying to sell your products to neo-Nazis.
The important thing to keep in mind is that the algorithm's role is to predict and match content with users. We often lose sight of the term "users" in user-generated content platforms and treat it as another flat term, like our very loose description of "big tech." Users have preferences shaped by culture, geography, gender, sexuality, and everything else that makes you human. On platforms, every user is both a speaker and a listener, so there will inherently be conflict between a speaker who wants to share something, the users who want to listen, and the users who don't. The algorithm matches these speakers and listeners, which is a constant process of adjustment and learning. This is where advanced algorithms and machine learning come in.
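To make the matching logic Klonick describes concrete, here is a minimal illustrative sketch of an engagement-based ranker. Every name, field, weight, and threshold in it (Post, predicted_watch_seconds, outrage_score, advertiser_unsafe, score_post, rank_feed) is a hypothetical stand-in invented for this example, not a description of how any real platform's system actually works.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str
    predicted_watch_seconds: float  # estimated time this user would spend on the post
    outrage_score: float            # 0.0-1.0: how likely the post is to upset viewers
    advertiser_unsafe: bool         # flagged as content advertisers refuse to appear next to

def score_post(post: Post, topic_affinity: dict[str, float]) -> float:
    """Score a post for one user: reward predicted time-on-content and topic
    match, but demote content likely to push the user (or advertisers) away."""
    affinity = topic_affinity.get(post.topic, 0.0)  # 0.0-1.0 interest in this topic
    score = post.predicted_watch_seconds * (0.5 + affinity)
    if post.outrage_score > 0.8:    # mild outrage can boost engagement; extreme outrage loses users
        score *= 0.3
    if post.advertiser_unsafe:      # brand-unsafe content is demoted to protect ad revenue
        score = 0.0
    return score

def rank_feed(posts: list[Post], topic_affinity: dict[str, float]) -> list[Post]:
    """Match 'speakers' (posts) to a 'listener' (the user) by descending score."""
    return sorted(posts, key=lambda p: score_post(p, topic_affinity), reverse=True)
```

The point of the sketch is only the trade-off Klonick names: the ranking rewards predicted engagement while penalizing content that would drive users or advertisers off the platform.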
HULR: What is the relationship between platform regulation of user-generated content and the First Amendment right to Freedom of Speech? Is the ability for the private sector to regulate free speech problematic?
KK: Users are protected by the First Amendment, but that protection runs against the government regulating speech, not against Facebook deciding what you can or cannot say on its platform. The First Amendment also comes into play through the platforms' rights as publishers: platforms have the right to set their own editorial policies and control what can be said on their private platforms. So outside of very few exceptions, such as child pornography, sexual abuse material, and terrorist content, the government cannot dictate what speech companies can publish. That creates a dilemma: it blocks most regulation and makes it extremely difficult to create laws to solve these problems. Now we're in an interesting situation of markets and norms that are all run by the platforms, not the government.
HULR: How does regulation online differ from regulation in traditional press?
KK: It has yet to be tested. On the topic of traditional press, there's a case called Miami Herald Publishing Co. v. Tornillo that struck down a Florida law requiring newspapers to offer equal space to political candidates. It provides robust First Amendment protections for media, newspapers, and the press to publish what they want. There have been other similar cases in recent years. You've seen Zuckerberg and others talking about platforms as media companies, likely a strategic maneuver to claim those same First Amendment protections for platforms as well. However, the courts still haven't seen any cases like this, so we don't know what it's going to look like.
HULR: What reforms do you recommend that we take? Should it be lawmakers in Congress who make these changes, or should the companies themselves do so? Can you help us visualize what a future with this regulation may entail?
KK: Reforms definitely need to be put into place. From everything I've studied and worked on, including the Oversight Board, and from what I've said about governance on these platforms, I can say that reforms need to come in the form of building good practices of governance into these private platforms. That involves transparency across multiple stakeholders, surfacing ideas, and dialogue with the public. When private companies have such large control over rights like freedom of expression, you need an intermediary body that facilitates some kind of robust and transparent process. We're seeing this in places like the European Union with the Digital Services Act.
One important thing to keep in mind when we're discussing regulation is that we're not just talking about the U.S. These platforms have huge effects on the value of certain markets globally, and something done in the U.S. to regulate or constrain the market here can have ripple effects on how these platforms operate and treat their users in other countries. Being attuned to the transnational nature of these problems is important when there are millions of people operating across borders and these companies are facilitating all of that communication; there's not just one space to regulate. It's like the difference between setting a new speed limit for the nation and setting one in your small town: these decisions have different scopes. This makes solving the problem much harder, and it requires that much more care and thoughtfulness before deciding on regulation.
HULR: In another academic paper, you describe your experience helping to develop Facebook's new Oversight Board. Does the transnational extent of this problem call for government solutions? Why was the board made independent rather than associated with the company itself, or with the government?
KK: You're asking a private company to voluntarily give up some part of its power, to crack open the doors. By starting a dialogue, they're giving up some power. It's a give and take. Companies don't necessarily want to be in conversation with you. Apple, notoriously, is the alternative to how Facebook has chosen to operate. Apple operates almost like a joke; it's like a North Korea. They don't open the door even a crack, they don't respond to press requests, they don't have any kind of comms presence. They have never been the type to tell us what they know, even during the entire controversy over their privacy policies. So when you're trying to get someone to voluntarily create something like this, you have to be sensitive to the fact that it's a constant negotiation. I would say that making it independent was basically the only way it was going to serve any of the purposes Facebook needed it to serve. People couldn't just think it was a scapegoat, and part of that meant it couldn't be in the pocket of Facebook, serving only Facebook's needs; it had to run itself and have its own power. So it's a process of getting them to open that door and start the conversation.
HULR: What gap in knowledge or transparency have you noticed after working in big tech that is most salient for consumers to know? How can we help mitigate these information asymmetries?
KK: We are in a period of massively fast norm adjustment. Technology is only considered technology until it is normalized. My friend Professor Molly Brady likes to remind me that indoor plumbing and toilets were new technology until they became standardized and it was mandated that every house had to have indoor plumbing. I bring that up because I think we're in this rapid norm-setting period, and there has been an incredible amount of education that the public has gone through. It has only been four years since the first time Mark Zuckerberg testified before Congress. Content moderation was something I had to describe to people; no one knew content was being removed from platforms in 2017 and 2018. Now everyone understands what the platforms are doing. Generally speaking, it comes down to being aware of what's happening, keeping up to date on the news, and educating yourself. Choose your battles wisely. I think one of the things that worried people was the bullish monopoly power of big tech. There were calls for breakups and all of these types of things. These companies still have tons of problems that we do need to regulate, but an antitrust approach makes less sense now because the natural cycle of the market very quickly allowed for that disruption to happen. So there are certain lessons I think we're learning, and likewise certain things that remain even after market disruption: the things we still think are unequal and unjust about how these platforms operate. Those are the things we need to continue to press on and reform.
HULR: What implications does your research have for tech companies that are seeking to balance user privacy while still ethically utilizing their information? Do you have any advice for said companies?
KK: Generally speaking, the Oversight Board is by no means perfect. There are tons of flaws. It's just the least bad option by a lot of measures. But I do think it is something that worked and made sense for Facebook and Meta. Other platforms operate with very different types of user-generated content moderation models, and an oversight board wouldn't make sense for them. Reddit, for example, is not the same type of model; its moderation is much more user-driven, with volunteer mods. An oversight board is not the right model for every platform. But what I do think it represents is the idea of having some type of visibility and purchase in a top-down, centralized private platform that effectively has a huge role in shaping public sentiment. And so having that as the driving theory behind it, rather than the particular institution, is, I think, very important.