Large Internet platforms have always moderated the content they allow on their services, and Section 230 of the Communications Decency Act allowed user-supported platforms to grow by shielding them from many of the liabilities a publisher would face1. Yet today, we’re seeing unprecedented censorship, editing and moderation on platforms with massive userbases. The Internet of today has corporate superpowers, virtual nations, that can effectively dismiss voices and points of view that do not fit their narrative. When Twitter is able to suppress the reach of articles from established newspapers, we have to ask what meaningful reform can be made to Section 230 to ensure every American has control over the way their voice is presented to the world. The following are a few proposals for reforming the law, along with the challenges in implementing them.
1. No Editorializing on other peoples’ content. No fact checking. No disclaimers.
Watchmen, a 1986 comic book series, asked, “Who watches the Watchmen?” In the comic’s postmodern approach to superheroes, the phrase is graffitied on city walls as a cry for accountability from heroes who claim to fight for the people. In our post-fact world, social media platforms have added fact checks to content, or links to official government websites under videos that question common narratives. On May 26th, in an unprecedented move, Twitter added a warning to one of President Trump’s tweets. In doing so, Twitter may have crossed the line of protection set forth in Section 230, moving from the role of a platform to that of a publisher2.
Many of these fact checks are misleading. In some cases, they even fact check hyperbolic expressions that were never intended to be taken literally3. When Facebook or YouTube adds commentary to someone else’s content, they are acting as an authority, not just a secondary source. The vast majority of people will accept that commentary without going through the effort of looking up the facts themselves.
Platforms should not be allowed to add commentary to other peoples’ content. Period. These fact checks are disingenuous at best, and outright propaganda at worst. They discourage people from seeking out their own secondary sources, and profess a level of authority which a platform should not assume. Big Tech is often telling someone else’s audience what the science is, with an authority that diminishes the true nuance and complexity of science. They’ve silenced the voices of experts they don’t like, in pure hubris and arrogance4.
2. Platforms Should Not Moderate Private Conversations
This one should be obvious: do you want to live in a world where AT&T cuts off your phone conversation because you say a banned word? When Twitter and Facebook blocked the New York Post article on Hunter Biden’s laptop, several platforms prevented users from sharing links to the article even in personal, direct messages. Although an argument can be made for blocking URLs as part of spam and malware filtering, blocking a link to a major news website obviously doesn’t fit into that camp. It was straight-up censorship. The technological solution may be to switch all your conversations to a tool with end-to-end encryption, but practically it’s difficult to get the people you want to communicate with to install yet another mobile application.
3. Your Location Should Be Portable
In the United States, the Telecommunications Act of 1996 allowed for the portability of phone numbers from one provider to another1. Many countries have similar laws, allowing landline and cellphone numbers to be transferred to competitors. The open source social networking platform Mastodon has detailed instructions for exporting data from a server, as well as a means to indicate where an account has moved5. If a platform decides to disable or ban someone’s account, there should always be a means to download one’s personal data and to indicate where that person has moved. They should always be allowed to display a new URL or e-mail address, to indicate where they can be found or contacted.
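Mastodon implements this redirection through ActivityPub: a migrated account’s actor document carries a `movedTo` field pointing at the new account, which other servers use to forward followers. A simplified sketch of the relevant fields (the domains here are hypothetical, and a real actor document contains many more properties):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://old.example/users/alice",
  "type": "Person",
  "preferredUsername": "alice",
  "movedTo": "https://new.example/users/alice"
}
```

Any legislated portability requirement could mandate something similar: a machine-readable forwarding address that survives even after the original account is disabled.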
4. Users should be in control of their timeline experiences, with the default being reverse chronological or inbox sorted alphabetically
In the not-so-distant past, every content platform presented posts as they occurred, in chronological order. This included old pioneers like LiveJournal and MySpace, and even those who survived the attention wars like Facebook and Twitter. Yet today, nearly every platform has abandoned chronological order and instead uses hidden algorithms to present users with content in an order that seeks to maximize engagement. Few outside these companies have any idea how those algorithms work.
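The difference between the two orderings can be sketched in a few lines. This is a toy illustration with made-up posts and a single invented “engagement” score; real ranking systems weigh hundreds of proprietary signals:

```python
from datetime import datetime, timezone

# Toy posts: author, timestamp, and an opaque engagement score a platform might compute.
posts = [
    {"author": "alice", "ts": datetime(2020, 11, 1, tzinfo=timezone.utc), "engagement": 12},
    {"author": "bob",   "ts": datetime(2020, 11, 3, tzinfo=timezone.utc), "engagement": 3},
    {"author": "carol", "ts": datetime(2020, 11, 2, tzinfo=timezone.utc), "engagement": 97},
]

# Reverse chronological: the newest post comes first; no hidden weighting.
chronological = sorted(posts, key=lambda p: p["ts"], reverse=True)

# Engagement-ranked: an opaque score decides the order, regardless of recency.
ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["author"] for p in chronological])  # ['bob', 'carol', 'alice']
print([p["author"] for p in ranked])         # ['carol', 'alice', 'bob']
```

A “default reverse chronological” mandate amounts to requiring the first sort as the out-of-the-box experience, with anything like the second available only as an opt-in.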
In real life, someone who doesn’t talk much is often given more attention when they do decide to open their mouth. This varies with the person, of course, and with whether what they say is typically insightful or stupid. Social networks do not operate in the same fashion: people who rarely participate rarely get promoted. Social networks need to encourage constant use, negating any Silent Bob effect.
Legislating a default user-controllable view would be difficult and horribly problematic. Facebook and Twitter make a lot of revenue from sponsored content that’s boosted into the regular feed. It’d also be difficult to apply such rules to sites like Reddit or Hacker News, where content is ranked up and down by the votes of the entire userbase. Knowledge about how the specific algorithms work is often restricted as a means of spam prevention.
5. Removal of an Account Guarantees a 100% Refund of all Purchases
This may not fall under Section 230 specifically, but it is tangentially related and a huge cause for concern. Facebook requires an account on its service in order to use newer Oculus devices. There have been people who created Facebook accounts just for these devices and were immediately banned6. One person who had an active Google account for 15 years had it suspended and disabled, with no reason given and the appeal denied7. With so much information, documents, photos and communications tied up in these platforms, banning accounts with no explanation can be devastating.
America protects freedom of speech via the First Amendment to the Constitution; however, this only applies to government censorship, not to privately run platforms. The challenge is that platforms are becoming so large that they are essentially new public squares. Any regulation of those squares must be careful to ensure the right of people to freely exchange ideas, without placing undue burdens on newer platforms, or creating loopholes that would grant companies more control over their users’ content.
Say the government mandates that a platform cannot editorialize user-contributed posts. If a website hosts content for free, it typically makes money from advertising. What’s to stop it from running ads in front of someone’s content that contradict that content: a fact check in the form of an advertisement? If you mandate protections for people whose accounts are banned for policy violations, what about accounts banned for hosting illegal content? Should a platform still have to indicate where someone has moved if the new site promotes child abuse? If a platform goes out of business, is it required to let users download their content for a specified amount of time after bankruptcy? What if it does not have the funds to do so?
The Danger of Legislation
If the SOPA and PIPA hearings taught Americans anything, it’s that most legislators do not have the background to fully understand the technological and social implications of the rules they seek to impose on the Internet. Senator Ted Stevens’s famous quote, “The Internet is a series of tubes,” is as comical as it is tragic. I fear anything proposed by Congress will serve to benefit Big Tech, or legislators themselves, without any forethought for the average citizen or those who develop platforms based on free and open source software.
No matter how succinctly or comprehensively one drafts legislation to protect the rights of individuals on the Internet, lawyers for the big megasites will always be working just as hard to push the limits of what they can legally do. This can create a legal arms race which may not benefit consumers, and would likely also stifle innovation and growth.
One of the original intents of Section 230 was to avoid constraining the growth of new technologies on the Internet. To get away from the influence of Big Tech, we cannot lose those protections. For both new centralized platforms and new decentralized technologies to flourish, they must have the same freedoms afforded the previous generation. Those protections were aimed at reducing liability, but they have not adapted well to platforms so large that moderation can turn into outright censorship.
A large platform can demonize a citizen to the masses, and then remove that person’s ability to respond. Not everyone is an Alex Jones, able to carry an audience and construct a platform of his own. Some end up like the dozens of channels that were removed from YouTube last month8. Independently, few had over 100,000 subscribers, but collectively they had several million9.
The Human Condition
In the past, I’ve suggested that technology is a better solution than legislation. I was rightly criticized for a perspective that doesn’t account for the human element. Attention is a precious resource, and habits around visiting existing websites are difficult to break.
As social media companies increase moderation, many of the newer centralized platforms such as Parler, Gab, Voat, Minds and others increasingly hold viewpoints shaped by their incoming diaspora. This has had the counter-effect of pushing the big platforms like Facebook, Twitter, Reddit and even Mailchimp10 to lean increasingly hard-left. Even on distributed platforms such as Mastodon, Pleroma or PeerTube, some administrators maintain block lists that disallow communication between servers (imagine if Yahoo blocked all e-mail from Gmail).
Just as the newspapers in many cities consolidated into a limited number of outlets that leaned distinctly to one political side, so will websites narrow their audiences to specific biases. With all of us separated, we are unable to truly communicate, instead watching the world through a straw. These platforms have narrowed our world views, their fact checks eroding our ability to reason for ourselves. For those who are unaware, Big Tech and Big Media are telling people what to think. I’m not sure if meaningful reform is possible, but the current situation directly violates the concept of individual freedom of speech, collectively distorts our world view, discourages diversity of viewpoints, and hinders our ability to communicate freely and frankly about contentious issues.
Bret and Heather 46th DarkHorse Podcast Livestream: RBG, Scalia, and the Court Supreme. 19 September 2020. (41m:00s) (Podcast) ↩
After over 15 years of using #google, my account has been permanently disabled…. 21 October 2020. Cleroth. ↩
Mailchimp makes its censorship rules official, outlines right to ban users for “inaccurate” content. 29 October 2020. Rankovic. Reclaim the Net. ↩