Section 230 and the Future of the Internet

Before the Supreme Court are two cases that question an immunity clause in the 1996 Communications Decency Act. That act includes a section (Section 230) that protects companies hosting user content from being sued over what their users post. In particular, one line of the section specifies:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (47 U.S.C. § 230(c)(1)).

In essence, this law protects one entity from the actions of another. This makes sense. Just as telephone companies are not responsible for what a caller says on the line, Internet companies are not responsible for what a user says or posts on their platforms.

Without this immunity, many of the large Internet companies would struggle to persist. Amazon depends on user comments on products. Google’s subsidiary YouTube exists entirely on user-generated video. Facebook, Twitter, and other social media companies depend on immunity, as does Wikipedia.

Gonzalez v. Google

What’s questioned, however, is whether these companies are responsible for the algorithms used to rank user-generated content. In Gonzalez v. Google, the family of a terrorist attack victim claims that Google’s algorithm allowed the terrorist to become radicalized by promoting ISIS-created content. Gonzalez claims that the algorithm falls outside the scope of Section 230 and that anti-terrorism laws should guide the legal case. Google disagrees, claiming that the algorithm is neutral in its recommendations and hence still protected by Section 230.

Twitter v. Taamneh

What’s also in question is the extent to which moderation protects companies from other laws. In Twitter v. Taamneh, Taamneh claims Twitter was not aggressive enough in preventing terrorists from using its services. The argument is that, through lax moderation, Twitter aided and abetted terrorists in committing their crimes and is therefore subject to anti-terrorism laws. Twitter, obviously, disputes this claim.

What’s at stake

Google and other major tech companies worry that nullifying Section 230, even in part, would be a horror show. Without this immunity, Google claims, the Internet would either become a free-for-all, or companies such as Google would have to moderate content into nothing.

What Google fails to address is the current horror show with the algorithms. One study found that social media companies are the parties primarily responsible for spreading misinformation. Algorithms designed to drive engagement, not thoughtfulness, tap into our base emotional desires. Misinformation thrives in such an environment because emotions are notoriously bad at checking the truth. Another study found that social media, through likes, shares, and algorithms, may be a driving factor in anxiety and depression among youths.

Furthermore, such algorithms promote and recommend content. Despite what Google claims, such algorithms cannot by default be value neutral, because they inherently embody choices about which factors matter for promotion. Tech companies control these algorithms. Even if we agree that Section 230 should immunize Internet companies from lawsuits over user-generated content, I don’t see how that immunization applies to algorithms that these companies control and use to promote content. That’s where I see the flaw in their argument.

Section 230 and the Future

Would removing immunization from algorithms fundamentally change the Internet? Undoubtedly. But whether that change would be better or worse is an open question. Given the problems we currently have, I’m inclined to think it would be for the better, at least once the economic shift that would follow were immunity over the algorithms lifted.

 
