Section 230

May 2021

The Communications Decency Act (CDA) of 1996 includes a provision designed to protect freedom of expression and innovation on the internet: Section 230.
In the nineties there was a proliferation of startups offering publishing services and the atmosphere favored innovation. Americans debated how to handle all kinds of objectionable content on the Web, including hate speech and defamation (Retroreport, 2021).
The motivation for Section 230 came from a 1995 case in which the online service provider Prodigy Services Co. was deemed a publisher of its users’ posts (Dwoskin, 2020).
The growth of technology and the internet has been exponential in recent decades. In 1995 no one could foresee the reach and power of social media, blogs, video blogs, and all the other forms of communication that are not considered “publishers” under the law (Brannon, 2019).
In recent years Section 230 has been in the news, with both Republicans and Democrats claiming the law favors the other side. President Trump threatened to revoke it in 2020 after Twitter labeled several of his posts. One label read: “This Tweet violated the Twitter Rules about glorifying violence. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain accessible” (Dwoskin, 2020).
Long before Trump’s use of the platform for controversial statements, Twitter users, particularly women and people of color, had complained that the company did nothing when they reported harassment and abuse. In late 2017, a month after congressional hearings on social media and Russian interference, a period of soul-searching began across social media companies. The following year, Facebook launched a major fact-checking effort and hired tens of thousands of content moderators to police its service. Twitter began purging large numbers of fake accounts, banned “dehumanizing speech” against certain categories of users, and embarked on a broad effort to solicit public comment about its speech policies. But as the companies became more aggressive in policing their services and setting rules, they continued to exempt politicians, arguing that their comments were too “newsworthy” to censor (Dwoskin, 2020).
Since before the 2016 elections, algorithms drawing on an ever-growing bank of user data have targeted one user at a time, allowing individuals to see customized news and opinions. The Cambridge Analytica/Facebook case in 2018 prompted a reexamination of online privacy rules (Byers, 2021).
Around the world, governments have been elected or brought down because of social media. WhatsApp (a company owned by Facebook) favored Bolsonaro during the Brazilian elections (Avelar, 2019; Alves dos Santos, 2019). In Myanmar, social media facilitated communications that led to a genocide (Mozur, 2018). During the COVID-19 pandemic, social media was a double-edged sword, circulating both scientific information and misinformation. The world is debating the social responsibility of social media while also enjoying its undoubtedly positive aspects (Alves dos Santos, 2019).
In the United States there is broad consensus that Section 230 is outdated and needs review. It remains unclear, however, how it should be updated (Byers, 2021).

Background Analysis

Senator Ron Wyden (D) and former Representative Chris Cox (R) co-authored Section 230 in 1996 with the intent of relieving the burden on tech companies. At the time, courts held that online providers like Prodigy, which moderated some user content, were potentially liable for anything their users posted. Companies faced a choice: clean up their websites and risk getting sued, or go totally hands-off and face no legal consequences.

Wyden claimed that “we wanted small businesses to focus on hiring engineers, developers and designers, rather than worrying about whether they had to hire a team of lawyers” (Tracy, 2021).

Section 230 was enacted as Section 509 of the Telecommunications Act of 1996, titled “Online Family Empowerment,” while the CDA as a whole had started with the problem of pornography online: former Nebraska senator James Exon wanted the government to clean up the Internet by effectively making indecent material like porn illegal online.

The provision responded to a 1995 decision issued by a New York state trial court: Stratton Oakmont, Inc. v. Prodigy Services Co.

The plaintiff in that case was an investment banking firm, which alleged that Prodigy, an early online service provider, had published a libelous statement unlawfully accusing the firm of committing fraud. Prodigy itself did not write the allegedly defamatory message, but it hosted the message boards where a user posted it.

The New York court concluded that the company was nonetheless a “publisher” of the alleged libel and therefore subject to liability, emphasizing that Prodigy exercised “editorial control” by actively moderating the content of its message boards (Brannon, 2019).

Section 230 sought to abrogate Stratton Oakmont. Representative Chris Cox argued on the House floor that the ruling against Prodigy was “backward,” referencing a different case in which a federal district court had held that CompuServe, another early online service provider, could not be held liable for allegedly defamatory statements posted on its message boards. Both Cox and his co-sponsor, then-Representative Ron Wyden, emphasized that they wanted online service providers, working with concerned parents and others, to be able to take down offensive content without exposing themselves to liability.

These provisions were intended to ensure that even if online service providers did exercise some limited editorial control over the content posted on their sites, they would not thereby be subject to publisher liability – hence, they were called “Good Samaritan” provisions (Brannon, 2019).

Analysis

Twenty-five years later, Section 230 has become a source of debate. Its most contested provision reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[1]

The protected intermediaries include not only regular Internet Service Providers (ISPs) but also a broader range of “interactive computer services,” covering any service that publishes third-party content, social media included.

The wording here echoes the congressional findings codified in the statute: these services “offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops,” and they “offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”

One of the contentious consequences of Section 230 traces back to 1995, when an anonymous AOL user impersonated a man named Kenneth Zeran, using his name and phone number to sell T-shirts glorifying the Oklahoma City bombing. After receiving a tsunami of threatening phone calls, Zeran begged AOL to take down the anonymous user’s ads, and he sued the company in 1996. But the court said Section 230 allowed AOL to leave up the posts even after Zeran reported them (Retroreport, 2021).

Today, some are of the opinion that site operators should enjoy immunity from liability only if they engage in reasonable content-moderation practices. There has to be an exchange: a shield from liability comes with a responsibility to moderate (Retroreport, 2021).

Former president Trump had his account suspended for an “indefinite time” by Facebook in January 2021. While Twitter banned the former president permanently, Facebook left him waiting to hear whether he could retrieve his accounts. The case went to Facebook’s Oversight Board, funded by the company through a $130 million independent trust and made up of 20 experts from around the world, including specialists in law and human rights, a Nobel Peace laureate from Yemen, the vice president of the libertarian Cato Institute, the former prime minister of Denmark, and several journalists.

The board decided on May 5, 2021 that Facebook was attempting to “avoid its responsibilities” by imposing an indefinite suspension, which the board called “a vague, standardless penalty,” and then asking the board to make the final call (Bond, 2021).

The board declined Facebook’s request and insisted that Facebook apply and justify a defined penalty (Bond, 2021).

As technology advances, more creative solutions like oversight boards may have to be devised, even though in the Trump-Facebook case the board represented only another step in the process of deciding whether to suspend the former president’s accounts for good.

Machine Learning and Machine-Generated Content

The power and penetration of machine-learning algorithms lie at the heart of the question of how far Section 230’s protection of companies should reach (Tremble, 2017).

In March 2021, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey went back to Congress for the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation (Johnson, 2021).

Lawmakers discussed ending the liability protections granted by Section 230 of the Communications Decency Act (CDA), disinformation, and how tech can harm the mental health of children, but artificial intelligence took center stage: the word “algorithm” alone was used more than 50 times (Johnson, 2021).

Pointing to YouTube’s recommendation algorithm and its known propensity to radicalize people, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) introduced the Protecting Americans from Dangerous Algorithms Act[2] in October 2020 to amend Section 230 and allow courts to examine the role of algorithmic amplification that leads to violence. Next to Section 230 reform, one of the most popular solutions lawmakers proposed was a law requiring tech companies to perform civil rights audits or algorithm audits. After the bombast and the bipartisan recognition of AI’s capacity to harm people on display during the hearings, the pressure is on Washington, not Silicon Valley (Johnson, 2021).
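
To make “algorithmic amplification” concrete, here is a minimal, hypothetical sketch of engagement-based feed ranking. Every name, field, and weight is invented for illustration; it does not describe any platform’s actual system.

    # Hypothetical sketch of engagement-based ranking, the kind of
    # amplification the Eshoo-Malinowski bill targets. All fields and
    # weights are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        comments: int
        shares: int

    def engagement_score(post: Post) -> float:
        # Shares push content to new audiences, so they are weighted
        # most heavily; provocative posts tend to score well on this.
        return post.likes + 3.0 * post.comments + 5.0 * post.shares

    def rank_feed(posts: list[Post]) -> list[Post]:
        # A purely chronological feed would leave the ordering to time;
        # sorting by predicted engagement is the platform's own choice
        # about which speech to amplify.
        return sorted(posts, key=engagement_score, reverse=True)

The bill’s premise is that this ordering decision belongs to the platform rather than to its users, and so could be examined by courts when amplified content leads to violence.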

Tremble (2017) analyzed Section 230 through a complaint filed in the Eastern District of New York on August 10, 2016, which formally accused Facebook of aiding the execution of terrorist attacks. The complaint described user-generated posts and groups promoting and directing the perpetration of terrorist attacks. Under Section 230, interactive computer service providers such as Facebook cannot be held liable for user-generated content where the provider did not create or develop the content at issue.

However, this complaint stood out because it sought to hold Facebook liable not only for the content of third parties, but also for the effect its personalized machine-learning algorithms, or “services,” have had on the ability of terrorists to execute attacks. By alleging that Facebook’s actual services, in addition to its publication of content, allow terrorists to execute attacks more effectively, the complaint sought to negate the applicability of Section 230 immunity (Tremble, 2017).

Tremble (2017) argued that Facebook’s services, specifically the personalization of content through machine-learning algorithms, constitute the “development” of content and as such do not qualify for Section 230 immunity.
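
A toy sketch may help illustrate the distinction Tremble draws. All profile fields and the matching heuristic below are invented for illustration; they stand in for the far richer behavioral data a real platform mines.

    # Toy illustration of Tremble's (2017) argument: the service writes
    # none of the content, yet its personalization, built on mined user
    # data, decides what each individual user sees.

    def interest_overlap(user_interests: set[str], post_tags: set[str]) -> float:
        # Crude stand-in for signals mined from clicks, watch time,
        # and social-graph data.
        if not post_tags:
            return 0.0
        return len(user_interests & post_tags) / len(post_tags)

    def personalize(user_interests: set[str],
                    posts: list[tuple[str, set[str]]],
                    top_k: int = 3) -> list[str]:
        # Every post is third-party content, but the selection of which
        # posts surface for this particular user is the platform's own
        # output; that selection step is what Tremble characterizes as
        # "development" of content.
        ranked = sorted(posts,
                        key=lambda p: interest_overlap(user_interests, p[1]),
                        reverse=True)
        return [text for text, _ in ranked[:top_k]]

On this view, the more a service’s mined data shapes what each user sees, the harder it is to argue that the service merely passes along “information provided by another information content provider.”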

Tremble (2017) also proposed a new framework, guided by congressional and public policy goals, to create brighter lines for technological immunity and to tailor immunity to account for the user data mined by providers and the pervasive effect that the use of that data has on users, two issues that courts have yet to confront.

[1] 47 U.S.C. § 230 – https://www.govinfo.gov/content/pkg/USCODE-2011-title47/pdf/USCODE-2011-title47-chap5-subchapII-partI-sec230.pdf

[2] https://trackbill.com/bill/us-congress-house-bill-8636-protecting-americans-from-dangerous-algorithms-act/1948816/

Conclusion

While Section 230 has aided the development and growth of the internet, that purpose may largely have been served. Internet companies are among the most powerful and profitable companies in existence in 2021 (Tremble, 2017).

Because of today’s algorithms, social media platforms such as Facebook and Twitter exercise far more agency over how information reaches people than providers did back in 1996.

The amendment proposed in 2020 by Representatives Eshoo and Malinowski (H.R. 8636) represents an important update to Section 230. It proposes that algorithmic amplification lose the law’s protection: the shield would not apply where “the interactive computer service used an algorithm, model, or other computational process to rank, order, promote, recommend, amplify, or similarly alter the delivery or display” of content.

On the other hand, it is important to understand the interplay between Section 230 and the First Amendment: even without Section 230, the First Amendment will remain (Internet Association, 2020).

Painting Section 230 with too broad a brush may ignore its wide impact and the implications of significant changes to the law. The Internet Association’s review (2020) pointed to areas where additional work needs to be done to understand how Section 230 impacts litigation today and whether changes would alter the end result of litigation or simply make it more costly.

References

Alves dos Santos, M. (2019). Desarranjo da Visibilidade, Desordem Informacional e Polarização no Brasil entre 2013 e 2018 [Disruption of Visibility, Informational Disorder and Polarization in Brazil between 2013 and 2018] (Doctoral dissertation). Retrieved from https://www.academia.edu/41690755/Desarranjo_da_visibilidade_desordem_informacional_e_polariza%C3%A7%C3%A3o_no_Brasil_entre_2013_e_2018
Avelar, D. (2019, October 30). WhatsApp fake news during Brazil election ‘favoured Bolsonaro.’ The Guardian. Retrieved from https://www.theguardian.com/world/2019/oct/30/whatsapp-fake-news-brazil-election-favoured-jair-bolsonaro-analysis-suggests
Banker, E. (2020, July 27). A Review of Section 230’s Meaning & Application Based on More Than 500 Cases. Internet Association. Retrieved from https://internetassociation.org/wp-content/uploads/2020/07/IA_Review-Of-Section-230.pdf
Bond, S. (2021, May 5). Facebook Ban On Donald Trump Will Hold, Social Network’s Oversight Board Rules. NPR. Retrieved from https://www.npr.org/2021/05/05/987679590/facebook-justified-in-banning-donald-trump-social-medias-oversight-board-rules
Brannon, V. (2019). Liability for Content Hosts: An Overview of the Communication Decency Act’s Section 230. Congressional Research Service. Retrieved from https://fas.org/sgp/crs/misc/LSB10306.pdf
Byers, D. (2021, March 24). Zuckerberg calls for changes to tech’s Section 230 protections. NBC News. Retrieved from https://www.nbcnews.com/tech/tech-news/zuckerberg-calls-changes-techs-section-230-protections-rcna486
Dwoskin, E. (2020, May 29). Twitter’s decision to label Trump’s tweets was two years in the making. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2020/05/29/inside-twitter-trump-label/
Johnson, K. (2021, March 26). AI Weekly: Algorithms, accountability, and regulating Big Tech. Venture Beat. Retrieved from https://venturebeat.com/2021/03/26/ai-weekly-algorithms-accountability-and-regulating-big-tech/
Mozur, P. (2018, October 15). A Genocide Incited on Facebook, With Posts From Myanmar’s Military. The New York Times. Retrieved from https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html
Retroreport (2021). Trump and Biden Both Want to Repeal Section 230. Would That Wreck the Internet? Retrieved from https://www.retroreport.org/transcript/how-26-words-built-the-internet/
Tracy, R. (2021, March 24). Facebook’s Zuckerberg Proposes Raising Bar for Section 230. Wall Street Journal. Retrieved from https://www.wsj.com/articles/facebooks-zuckerberg-proposes-raising-bar-for-section-230-11616610616
Tremble, C. (2017). Wild Westworld: The Application of Section 230 of the Communications Decency Act to Social Networks’ Use of Machine-Learning Algorithms. Fordham Law Review, 86. doi: 10.2139/ssrn.2905819. Retrieved from http://fordhamlawreview.org/wp-content/uploads/2017/10/Tremble_November_v86.pdf
U.S. Department of Justice (2020). Section 230 – Nurturing Innovation or Fostering Unaccountability? Retrieved from https://www.justice.gov/file/1286331/download
