A brief history
Section 230 of the US Communications Act of 1934, enacted as part of the Communications Decency Act of 1996, is fundamental to the commercial development of the internet over the past 30 years. It has been called “the twenty-six words that created the internet” – referring specifically to its subsection, 47 U.S.C. § 230(c)(1):
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The practical effect of Section 230 is that Internet Service Providers (ISPs), alongside other companies which provide the technical infrastructure that facilitates communication over the internet, cannot generally be held liable for the data which passes over their networks (subject to limited exceptions, such as federal criminal law and intellectual property claims). If the internet is viewed as a public road network, Section 230 renders ISPs analogous to road maintenance workers, responsible for keeping traffic flowing over their networks, but in no way liable for the drivers of the vehicles or their intentions (whether benign or malign).
The argument for implementing Section 230 in the mid-90s – the point at which the internet was starting to really take off – was that, without protection for ISPs from legal claims pertaining to user generated content (in those days it was mainly Usenet groups), the wings of the internet would be clipped. If ISPs were legally responsible for all the traffic flowing over their networks, akin to newspaper editors who can be sued for any defamatory or criminal content, the fear was that their business model would not be viable and this could stifle the commercial success of the internet.
Web 2.0
Although the internet always facilitated communication and, theoretically, anyone could publish their own content in the form of a website (or post comments on Usenet groups and early chatrooms), in practice not many people had the technical knowledge or inclination to learn HTML and the majority of internet users were passive. It wasn’t until 1999 that the Blogger platform popularised online self-publishing and made it accessible to non-technically minded individuals. A few years later in 2003, with the launch of MySpace – one of the first mass market social media platforms – user generated content finally became mainstream. This was quickly followed by Facebook in 2004 and the popularisation of the term Web 2.0 (by Tim O’Reilly) the same year.
Where Web 1.0 was a largely passive experience, with navigation likened to “surfing” TV channels, Web 2.0 was all about participation. With the launch of YouTube in 2005, everyone had their own TV channel. The phenomenal growth of social media, from around 1 million users in 2004 to an estimated 5.6 billion (around two thirds of the total global population) in 2025, led to a huge proportion of internet users becoming web content producers as well as consumers.
The transformation of the internet during the Web 2.0 era – and the proliferation of user generated content in the form of websites and blogs, videos and memes, social media posts and comments – would have arguably been impossible without Section 230. And although it’s a piece of US legislation, the principle of not holding ISPs or social media companies liable for user generated content has largely been upheld by courts around the world. For example, in the UK the 2006 High Court case of Bunt v Tilley [2006] EWHC 407 (QB) essentially held that an ISP was not liable for the defamatory posts of its users.
Pros and cons of Section 230
The main reason that Section 230 has effectively become global internet “policy”, with most jurisdictions giving it the nod either in the form of local legislation or case law, is that holding ISPs and other service providers liable for the content of users would arguably harm a significant portion of the economy. Aside from the jobs created directly by Big Tech, micro-industries such as influencing and the entire social media ecosystem would be decimated if platforms were designated as publishers.
The other big argument in favour of Section 230 is freedom of speech. Organisations such as the Electronic Frontier Foundation regularly campaign to retain the federal law, claiming that its repeal would harm online freedom of expression and prevent diverse political discourse. But it’s exactly this unbridled freedom of speech and expression online which has led to some of the most negative impacts of the modern internet, and which lies at the core of many of the arguments against Section 230. Although there are few voices calling for less freedom of speech, there is a growing consensus that social media companies in particular should provide more moderation, highlighted by the exodus from X/Twitter after the decision by Elon Musk to cut the safety team.
Some governments are beginning to grapple with the child safety concerns related to unfettered internet access, with a social media ban for under 16s coming into force in Australia in 2025 and now on the cards in other countries including the UK. But disturbing, threatening and dangerous user generated content on social media platforms and other online forums is not just harmful for children; many adults are also suffering as a result of exposure to the deluge of trolling, misinformation, unmoderated video, AI brainrot and information overload. The mental health crisis in the UK, with almost one in five adults on anti-depressants and an explosion of psychiatric diagnoses amongst children and young people, may be in large part related to the harmful effects of social media.
The current threat to Section 230
As calls to hold social media companies to account continue to grow across the world, there are political moves afoot in Washington to start winding down Section 230 on its 30th anniversary. The Sunset Section 230 Act is a bipartisan bill, brought forward by Democratic senator Dick Durbin and his Republican colleague Lindsey Graham, which seeks to “repeal Section 230 two years after the date of enactment” and would effectively remove the immunity from liability for user generated content on internet platforms. The proposal has received cross party support, as well as endorsements from a myriad of child safety organisations and Hollywood. There has been the expected pushback from free speech organisations and some academics, who are concerned that the repeal of Section 230 could either lead to a chilling effect on freedom of expression and blanket censorship of user generated content, or conversely could prompt internet companies to completely abandon moderation (so they could argue they have no knowledge of user behaviour).
Aside from direct efforts to repeal Section 230, there are mounting challenges to social media companies from other angles. A landmark social media trial is underway in California in which a young woman, identified by the initials KGM, claims that the intentional design of social media algorithms has caused her addiction and negatively affected her mental health. The defendants originally comprised Meta (Instagram and Facebook), Google (YouTube), TikTok and Snapchat, but the latter two companies settled with the plaintiff prior to the opening of the trial. So far, Google has claimed that its video sharing platform is not actually a social media website, and Meta has argued that KGM’s mental health problems stemmed from family trauma, with the head of Instagram denying that social media can be “clinically addictive”. Providing testimony as part of the trial, Meta CEO Mark Zuckerberg tried to defend against accusations that his platforms have specifically targeted children. It’s worth noting that the claims brought by the plaintiff in this case circumvent Section 230, focusing instead on the design of the social media platforms and their algorithms. It’s well known that social media companies try to make their platforms highly addictive to maximise their profits – with one of the founders of Facebook admitting in 2017 that this type of psychological manipulation was intentional – so the legal arguments will likely rest on the medical terminology of “addiction” and whether an addiction to social media can constitute a medical condition under the internationally recognised manual of psychiatric diagnoses, the DSM-5. Even if this case fails to open the floodgates, there are reportedly thousands of similar cases in the pipeline.
Other threats to the protection afforded to social media companies by Section 230 include bans for under 16s, initially in Australia and now being considered in many jurisdictions across the world. Although these bans do not make the companies liable for user generated content, they do place duties on them to prevent children accessing their platforms. Another recent development in this arena relates to the use of AI to create deepfake obscene material and propagate this content on social media, with both the UK government and EU counterparts making moves to outlaw the practice.
Beyond Section 230
So what would happen if Section 230 is repealed, or if governments clamp down further on social media providers and make them liable for any user generated content? One possibility is that they would simply pull out of any jurisdictions where they think regulations are too onerous. Some opponents of repealing Section 230 argue that providers might completely abandon moderation efforts and try to avoid liability for any content by denying knowledge of it – but this would likely fail if the regulations are sufficiently robust. In reality, they would be forced to attempt to moderate their platforms more effectively – but would this work?
It has been estimated that in excess of 500 hours of video are uploaded to YouTube every minute. This equates to almost 263 million hours per year, which would require around 30,000 human moderators watching simultaneously, around the clock – roughly 130,000 full-time staff – just to review YouTube’s uploads at real-time speed, before accounting for the fact that careful review takes considerably longer than playback. Even using the lowest paid moderators in low income countries, as favoured by tech companies, comprehensive human review across every major platform would be prohibitively expensive. As such, AI moderation tools would be heavily relied upon to meet obligations under new regulations – but the fact that social media companies still employ thousands of human moderators suggests that AI moderation alone cannot yet do the job reliably.
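The scale of the moderation problem can be sketched with some back-of-envelope arithmetic. The 500 hours per minute figure is the commonly cited estimate; the assumptions of real-time review speed and a roughly 2,000-hour working year per moderator are simplifications for illustration:

```python
# Back-of-envelope estimate of the human effort needed to watch
# every hour of video uploaded to YouTube (all figures are rough).

UPLOAD_HOURS_PER_MINUTE = 500      # commonly cited estimate
MINUTES_PER_YEAR = 60 * 24 * 365   # 525,600

# Total hours of video uploaded per year.
hours_per_year = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_YEAR
print(f"Uploaded per year: {hours_per_year:,} hours")          # 262,800,000

# Moderators needed watching simultaneously, around the clock,
# if review happens at real-time playback speed.
concurrent = hours_per_year / (24 * 365)
print(f"Concurrent moderators: {concurrent:,.0f}")             # 30,000

# Full-time headcount, assuming a ~2,000-hour working year each.
full_time_staff = hours_per_year / 2000
print(f"Full-time moderators: {full_time_staff:,.0f}")         # 131,400
```

In practice careful review takes several times longer than playback, so these figures are a lower bound for YouTube alone, before adding text, images and the other major platforms.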
In the face of a repeal of Section 230, or a new legal framework which exposes platforms to mass litigation, either AI moderation would need to significantly improve or else social media companies might be forced to close their core businesses. But this would go further than social media, and the very nature of Web 2.0 would be thrown into question.
Conclusion
Even if Section 230 isn’t repealed, greater social and political awareness of the dangers of social media and the consequent regulations, alongside potential landmark cases which penalise internet providers for the harm caused by user generated content, are increasingly challenging the cyber-libertarian principles which have arguably dominated the internet since its inception. Rather than moving to a decentralised Web 3.0, it is possible that we might instead move back to some form of Web 1.0, where user generated content is the exception rather than the norm. Although this may seem like a revolutionary idea after two decades of social media, the early internet itself had countercultural roots – so perhaps we have simply come full circle.
Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.