Who should police content on the Internet?

The beauty, and the danger, of the internet is that it's open to everyone. Anyone can put up a website, about pretty much anything. This "open platform" is an amazing thing, and means that innovation can come from all corners, without barriers or gatekeepers. It also introduces new challenges for how to deal with the inevitable bad things that come along with the good.

This past week, this question has come back to the foreground with the Charlottesville riots and the associated far-right websites that helped organize them. Particularly in focus has been the website "The Daily Stormer", one of the most vocal/violent/awful neo-nazi sites on the internet. In recent days, all of the infrastructure providers that served the Daily Stormer have dropped it, and it has relocated to a Russian domain. As of this writing, it appears that Anonymous has already DDoS'd dailystormer.ru and it is offline.

One of the companies that initially resisted dropping the Stormer, but ultimately did, was (USV portfolio company) Cloudflare. Cloudflare has taken heat for some time now for its insistence not to drop the Stormer, dating back to this ProPublica article from May. In Cloudflare's response to that article, CEO Matthew Prince included the following:

"Cloudflare is more akin to a network than a hosting provider. I'd be deeply troubled if my ISP started restricting what types of content I can access. As a network, we don't think it's appropriate for Cloudflare to be making those restrictions either. That is not to say we support all the content that passes through Cloudflare's network. We, both as an organization and as individuals, have political beliefs and views of what is right and wrong. There are institutions — law enforcement, legislatures, and courts — that have a social and political legitimacy to determine what content is legal and illegal. We follow the lead of those organizations in all the jurisdictions we operate. But, as more and more of the Internet sits behind fewer and fewer private companies, we're concerned that the political beliefs and biases of those organizations will determine what can and cannot be online."

This is a difficult line to walk, but it's actually really important to the underpinnings of the Internet. To understand why, you have to think about all of the bad things that happen on the internet every day -- from really bad things like neo-nazi genocide organizing (I am writing this as someone whose great grandfather was murdered for being a Jew) and child exploitation, all the way to marginally or arguably not-so-bad things like, "I don't like what this person wrote on this website and I want it taken down".

So, from the perspective of someone operating internet infrastructure, you are constantly bombarded with requests to take down things that people don't like, for one reason or another. This is unsustainable for two reasons: 1) the pure scale of it, especially for larger properties handling millions or billions (or trillions, in the case of Cloudflare) of pageviews, and 2) platforms are almost never in the best position to make a just determination about whether a given piece of content is legal or illegal. So the position of most large web platforms has been to delegate decisions about the legality of (user-generated) content to law enforcement, the courts, or other actors "at the edges" who are in the best position to make those determinations. From the user/customer perspective, if you think about it, you really don't want your ISP, or DNS provider, or hosting provider making arbitrary decisions about what speech is acceptable and what is not.

To further codify this general approach to handling content, we have Section 230 of the Communications Decency Act, which grants internet intermediaries limited liability when it comes to handling internet traffic and user-generated content (i.e., the speech of others). Generally speaking (and I am not a lawyer), this means that companies are legally insulated from content that someone else publishes on their platform.
If this were not the case, then it would be impossible, from a risk perspective, to operate any website that handled the speech or content of others (think Facebook, Dropbox, GoDaddy, etc.). If you needed to be 100% certain that every piece of information that any user published on your platform didn't violate any laws anywhere, you would simply not let anyone publish anything. Or you'd need to have some very draconian/slow editorial & approval process, so we'd have no Twitter, no Instagram, etc.

Over the years, every time a new wave of bad activity emerges on the web, there is the inevitable battle about who should be responsible for stopping it. This is what the Stop Online Piracy Act (SOPA) of 2011 was about -- it would have made internet platforms directly liable for any user-generated content that might have copyright violations in it (as opposed to the current situation, where sites must comply with valid takedown notices in order to keep their immunity). This has come up again in 2017 with the introduction of the "Stop Enabling Sex Traffickers Act of 2017", which seeks to limit CDA 230 protections in the name of addressing sex trafficking on the internet.

The really hard thing here, whether we're talking about piracy, or sex trafficking, or neo-nazis, is that tailoring a law that addresses those problems without having broader implications for free speech on internet platforms is really hard. And what we don't want is a world where, rather than an environment of due process, we end up with either platforms making arbitrary, unilateral decisions about the validity of content, or the vigilante justice of DDoS attacks knocking websites offline.

Cloudflare has done the hard work of defending due process and freedom of expression online. It's not easy to do this, and it is often unpopular (depending on who is doing the speaking). But in the end, they decided to drop the Daily Stormer from the Cloudflare platform.
Matthew Prince explained his decision to make this call in an email to the Cloudflare team:

"This was my decision. Our terms of service reserve the right for us to terminate users of our network at our sole discretion. My rationale for making this decision was simple: the people behind the Daily Stormer are assholes and I’d had enough. Let me be clear: this was an arbitrary decision. It was different than what I’d talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet. I called our legal team and told them what we were going to do. I called our Trust & Safety team and had them stop the service. It was a decision I could make because I’m the CEO of a major Internet infrastructure company. Having made that decision we now need to talk about why it is so dangerous. I’ll be posting something on our blog later today. Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power."

This is intentionally provocative, and meant to help everyone understand why it's dangerous to encourage large internet **infrastructure** providers to take editorial control. While it may seem obvious that this is the right call in this case, there are literally millions of other cases every day which aren't so clear, and around which we really should be aiming to have due process guide decisions.

I would encourage you to read the follow-up piece on the Cloudflare blog discussing why they terminated the Daily Stormer -- in it, Matthew details all of the kinds of players in the internet infrastructure space, what role they play, and how they impact free speech online.

In all of this, there is an important distinction between what platforms are **legally required** to preemptively take down, and what they are **within their rights** to remove. One tension in the industry is a hesitation to exercise the corporate right to remove content, at the risk of sliding towards a legal regime where platforms have a positive obligation to remove content -- it is the latter that introduces the greatest risks to free speech and due process.

Another key point, raised in the Cloudflare post, is the different roles played by various types of internet providers. There is a difference between low-level providers like DNS servers and backbone transit providers, and high-level applications like social networks, marketplaces, and other, more narrowly-focused applications. Generally speaking, the higher up in the stack you go, the more competition there is at that layer, and the more specific your application or community is, the more it makes sense to have community guidelines that limit or direct what kinds of activities can take place on your platform.

Lastly, none of this is to say that platforms don't and shouldn't partner with law enforcement and other authorities to remove illegal content and bad actors. This is actually a large part of what platforms do, every day, and it's critical to the safe functioning of the internet and of social platforms.

But perhaps the big takeaway here is that, as we continue to discuss where enforcement and censorship should take place, we should fall back on the underlying belief that transparency, accountability, and due process (and not arbitrary decisions by powerful companies or outside groups) are critical components of any solution.
