Is Section 230 necessary? Does it harm free speech? Does it enable misinformation? How should we change it, if at all?
Explore all perspectives, stances, and arguments for and against Section 230 and the regulation of online platforms with AllStances™ by AllSides.
Section 230 of the Communications Decency Act (CDA) protects online companies from liability arising from what is posted on their platforms. It allows web hosts, such as Twitter and Facebook, to let their users post practically any content without making the hosts themselves liable. It also permits web hosts to edit and restrict access to content posted by users and third parties, even if the material would be constitutionally protected. As tech giants like Google, Facebook, and Twitter have risen to prominence and now choose what speech and content to allow or ban, the clause has become highly controversial.
Enacted in 1996, the Communications Decency Act was originally created because, at the dawn of the internet, many lawmakers were concerned about the internet’s ability to spread “filth,” like child pornography and violence.
Senator Ron Wyden (D-OR) and Representative Chris Cox (R-CA) ensured that Section 230 was added to the CDA at the last minute in 1996 “to protect the unhindered growth of the internet and free speech.” They were particularly concerned about Stratton Oakmont, Inc. v. Prodigy Servs. Co., a 1995 court case that found an online bulletin board operator liable for content posted by users. To circumvent that decision and ensure online platforms would not be required to moderate every piece of content posted by their customers and publishers, Section 230 freed online service providers from the legal responsibilities of publishers.
According to the Congressional Research Service, “Courts have interpreted Section 230 to foreclose a wide variety of lawsuits and to preempt laws that would make providers and users liable for third-party content.” Section 230 immunity allows service providers, like Twitter and Facebook, to “act in good faith” to restrict content that provokes violence, is obscene, or is overly explicit.
More than 25 years later, Section 230 is widely viewed as one of the most impactful and influential pieces of legislation ever created concerning the media and the U.S. information economy.
Platforms’ unregulated ability to moderate content lies at the crux of an accelerating debate among legislators, corporations, and now billionaires. Mainstream rhetoric on both sides of the aisle in Washington calls for reforming or abolishing Section 230, but for very different reasons.
Explore all the arguments, stances and perspectives around Section 230 and the regulation of online platforms. Keep in mind that stances aren't mutually exclusive — some people might have viewpoints that align with multiple stances.
Core Argument: Social media platforms have an anti-conservative bias, leading to unfair moderation practices. They ban, suspend, or otherwise limit conservative speech far more than left-wing speech, creating an imbalance in political discourse.
- Google, Facebook and Twitter have vague policies that limit accountability. We should remove the immunity Big Tech companies receive under Section 230 unless they submit to an external audit that proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.
- Big Tech platforms like Google, Facebook and Twitter are hypocritical, operating on an “open internet for me but not for thee” principle — they advocate for an open and free internet with no restrictive gatekeepers who would block or throttle disfavored content, while moderating certain content and speech on their own platforms.
- Facebook, Twitter, and Google receive liability relief for the messages they carry, just like a telephone or electrical utility, but with none of the duties of nondiscrimination. Their near monopoly power allows the leaders of these private companies to indulge their personal preferences, imposing them on the country’s political discourse.
- Section 230 needs to be reformed to prevent platforms from covering up or preventing the spread of important news stories that are not favored by the left-dominated press, like the Hunter Biden laptop story.
- Legislators must deliberately overturn precedent set by the Spy Phone Labs LLC v. Google Inc case that allows the legal targeted censorship of any group by online platforms.
Core Argument: The moderation practices of online platforms are no longer consistent with the spirit of Section 230, which was designed to promote free speech.
- Guaranteeing free speech and nondiscrimination on dominant internet platforms will not crush online innovation; reasonable controls will protect free speech and allow our political culture to flourish.
- By engaging in censorship, Big Tech companies are behaving more like publishers than like platforms protected by Section 230. When social media companies behave as publishers, using their own editorial opinion of what should be seen or censored or adding their own comments, they should not be allowed legal protections under Section 230; they should be treated like newspapers or other publications.
- Legislators should replace or clarify vague terminology in Section 230(c)(2), like “otherwise objectionable” and “good faith,” which gives platforms free rein to remove any content under blanket Section 230 protections.
- Internet platforms have gotten so big and influential that they resemble governments, but without any checks and balances to defend freedom of speech.
- Big Tech’s politicization of what information Americans can access and when they can access it must be stopped.
- Reform would lead to more transparency and accountability.
- Those that manage the “public square” of free speech should be beholden to the same laws and stipulations as state entities under the First Amendment.
- Designating web platforms as common carriers would prevent them from giving undue preference or advantage to particular political, religious, or ethnic groups. Just as cell phone service providers and airlines cannot kick people off their networks or planes on the basis of political views, neither should internet platforms.
- Practical reform involves narrowing Section 230 immunities so that egregious censorship once again becomes a bad choice for social media companies; we can limit social media’s power to suppress voices without growing government.
Core Argument: The impact of changing Section 230 is unpredictable and will have chaotic outcomes.
- Section 230 has allowed internet platforms to grow exponentially, contributing money and jobs to the economy and driving innovation.
- Weakening Section 230 would make platforms fearful of lawsuits, so they would over-moderate and limit free speech. Comment sections on many sites would close for fear of what users might post, and services people rely on daily — especially social media — would suffer frequent outages while companies sought ways to minimize their risk. Content moderators might become overzealous in deciding what to leave up and what to take down.
- Section 230 fosters free speech, and any attempt at changing it would raise First Amendment rights issues.
- Revoking Section 230 would open up online platforms to an onslaught of litigation.
- Content moderation needs to be handled platform by platform, and online communities need to establish rules and community standards — the government will not be competent at addressing complex political questions.
- Revoking Section 230 would have a disproportionate impact on smaller or medium-sized platforms and apps that cannot afford to police their sites to stay compliant or bear the brunt of long, drawn-out, expensive lawsuits.
- If a federal statute does not implement a replacement policy, the vacuum left by revoking Section 230 would lead to a confusing patchwork of state-level liability protections.
- Concerns about platforms today are more about their scale than the nature of the platforms, and revoking Section 230 is not the answer to fix these issues.
- Revoking Section 230 puts platforms in an impossible position in which they either allow any kind of offensive content onto their platforms or face liability for offensive content that slips through their moderation system.
- Designating online platforms as “common carriers” might make offensive content worse and would prevent platforms from at least making an effort to moderate content.
- Claims that platforms are biased towards conservatives have been debunked by multiple studies.
- Limiting content moderation protections could hinder platforms’ ability to limit bullying and harassment, which can create “real-world dangers” for LGBTQ people and other minorities.
Core Argument: Section 230 immunities have allowed platforms like Facebook and Instagram to become hotbeds of misinformation, demonstrating the need for reform.
- Platforms have allowed falsehoods surrounding coronavirus, the 2020 election, and the Ukraine war to be shared exponentially on their sites without noticeable improvement in moderation.
- Beyond misinformation, the lack of platform accountability under Section 230 protections has enabled atrocities. According to the UN, Facebook misinformation played a “determining role” in the genocide of the Rohingya in Myanmar. Section 230 also allows abuses like revenge porn to go unchecked.
- Misinformation gets more clicks than real news on Facebook, and teens who spend more time on platforms with Section 230 immunities are more likely to be depressed.
- Section 230 has emboldened online platforms to moderate content, arrange ads, and implement algorithms in ways that threaten the health and safety of their users.
- Section 230 allows internet companies to operate in a “regulation-free” zone. To fix this, it must be weakened to remove some immunities.
- The government should subject online platforms to intense scrutiny of their algorithms, ensuring that they protect the interests of the consumer.
- “Carve-outs” should be created to prevent particularly egregious behavior from occurring unchecked on these sites. In particular, a “Bad Samaritan” carve-out would prevent platforms from promoting illegal activity.
Are we missing a stance or perspective? Email us!
Ethan Horowitz, News Assistant (Lean Right bias)
Henry A. Brechter, Managing Editor (Center bias)
Julie Mastrine, Director of Marketing and Media Bias Ratings (Lean Right bias)
Joseph Ratliff, Daily News Editor (Lean Left bias)
John Gable, CEO (Lean Right bias)