The beauty, and the danger, of the internet is that it’s open to everyone. Anyone can put up a website, about pretty much anything. This “open platform” is an amazing thing, and means that innovation can come from all corners, without barriers or gatekeepers. It also introduces new challenges for how to deal with the inevitable bad things that come along with the good.
This past week, this question has come back to the foreground with the Charlottesville riots and the associated far-right websites that helped organize them. Particularly in focus has been the website “The Daily Stormer”, one of the most vocal/violent/awful neo-Nazi sites on the internet. In recent days, all of the infrastructure providers that served the Daily Stormer have dropped it, and it has relocated to a Russian domain. As of this writing, it appears that Anonymous has already DDoS’d dailystormer.ru and it is offline.
One of the companies that initially resisted dropping the Stormer, but ultimately did, was (USV portfolio company) Cloudflare. Cloudflare has taken heat for some time now for its insistence not to drop the Stormer, dating back to this ProPublica article from May. In Cloudflare’s response to that article, CEO Matthew Prince included the following:
“Cloudflare is more akin to a network than a hosting provider. I’d be deeply troubled if my ISP started restricting what types of content I can access. As a network, we don’t think it’s appropriate for Cloudflare to be making those restrictions either.
That is not to say we support all the content that passes through Cloudflare’s network. We, both as an organization and as individuals, have political beliefs and views of what is right and wrong. There are institutions — law enforcement, legislatures, and courts — that have a social and political legitimacy to determine what content is legal and illegal. We follow the lead of those organizations in all the jurisdictions we operate. But, as more and more of the Internet sits behind fewer and fewer private companies, we’re concerned that the political beliefs and biases of those organizations will determine what can and cannot be online.”
This is a difficult line to walk, but it’s actually really important to the underpinnings of the Internet. To understand why, you have to think about all of the bad things that happen on the internet every day — from really bad things like neo-Nazi genocide organizing (I am writing this as someone whose great-grandfather was murdered for being a Jew) and child exploitation, all the way to marginally or arguably not-so-bad things like, “I don’t like what this person wrote on this website and I want it taken down”.
So, from the perspective of someone operating internet infrastructure, you are constantly bombarded with requests to take down things that people don’t like, for one reason or another. This is unsustainable for two reasons: 1) the sheer scale of it, especially for larger properties handling millions or billions (or trillions, in the case of Cloudflare) of pageviews, and 2) platforms are almost always not in the best position to make a just determination about whether a given piece of content is legal or illegal. So the position of most large web platforms has been to delegate decisions about the legality of (user-generated) content to law enforcement, the courts, or other actors “at the edges” who are in the best position to make those determinations.
From the user/customer perspective, if you think about it, you really don’t want your ISP, or DNS provider, or hosting provider making arbitrary decisions about what speech is acceptable and what is not.
To further codify this general approach to handling content, we have something called Section 230 of the Communications Decency Act, which grants internet intermediaries limited liability when it comes to handling internet traffic and user-generated content (i.e., the speech of others). Generally speaking (and I am not a lawyer), this means that companies are legally insulated from content that someone else publishes on their platform. If this were not the case, then it would be impossible, from a risk perspective, to operate any website that handled the speech or content of others (think Facebook, Dropbox, GoDaddy, etc.). If you needed to be 100% certain that every piece of information that any user published on your platform didn’t violate any laws anywhere, you would simply not let anyone publish anything. Or you’d need to have some very draconian/slow editorial & approval process, so we’d have no Twitter, no Instagram, etc.
Over the years, every time a new wave of bad activity emerges on the web, there is the inevitable battle about who should be responsible for stopping it. This is what the Stop Online Piracy Act (SOPA) of 2011 was about — this would have made internet platforms directly liable for any user-generated content that might have copyright violations in it (as opposed to the current situation where sites must comply with valid takedown notices in order to keep their immunity). This has come up again in 2017 with the introduction of the “Stop Enabling Sex Traffickers Act of 2017” that seeks to limit CDA 230 protections in the name of addressing child exploitation on the internet.
The really hard thing here, whether we’re talking about piracy, or child exploitation, or neo-Nazis, is tailoring a law that addresses those problems without having broader implications for free speech on internet platforms. And what we don’t want is a world where, rather than an environment of due process, we end up with either platforms making arbitrary, unilateral decisions about the validity of content, or the vigilante justice of DDoS attacks knocking websites offline.
Cloudflare has done the hard work of defending due process and freedom of expression online. It’s not easy to do this, and it is often unpopular (depending on who is doing the speaking). But in the end, they decided to drop the Daily Stormer from the Cloudflare platform. Matthew Prince explained his decision this way, in an email to the Cloudflare team:
“This was my decision. Our terms of service reserve the right for us to terminate users of our network at our sole discretion. My rationale for making this decision was simple: the people behind the Daily Stormer are assholes and I’d had enough.
Let me be clear: this was an arbitrary decision. It was different than what I’d talked with our senior team about yesterday. I woke up this morning in a bad mood and decided to kick them off the Internet. I called our legal team and told them what we were going to do. I called our Trust & Safety team and had them stop the service. It was a decision I could make because I’m the CEO of a major Internet infrastructure company.
Having made that decision we now need to talk about why it is so dangerous. I’ll be posting something on our blog later today. Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.”
This is intentionally provocative, and meant to help everyone understand why it’s dangerous to encourage large internet **infrastructure** providers to take editorial control. For while it may seem obvious that this was the right call in this case, there are literally millions of other cases every day which aren’t so clear, and for which we should really want due process to guide the decisions.
I would encourage you to read the follow-up piece on the Cloudflare blog discussing why they terminated the Daily Stormer – in it Matthew lays out all of the kinds of players in the internet infrastructure space, what role they play, and how they impact free speech online.
In all of this, there is an important distinction between what platforms are **legally required** to preemptively take down, and what they are **within their rights** to remove. A tension in the industry is that platforms hesitate to exercise their corporate right to remove content, for fear of sliding towards a legal regime where they have a positive obligation to remove it — and it is that obligation that introduces the greatest risks to free speech and due process.
Another key point, which is raised in the Cloudflare post, is the different roles played by various types of internet providers. There is a difference between low-level providers like DNS servers, backbone transit providers, etc.; and high-level applications like social networks, marketplaces, and other, more narrowly-focused applications. Generally speaking, the higher up in the stack you go, and the more competition there is at that layer, and the more specific your application or community, the more it makes sense to have community guidelines that limit or direct what kinds of activities can take place on your platform.
Lastly, none of this is to say that platforms don’t and shouldn’t partner with law enforcement and other authorities to remove illegal content and bad actors. This is actually a large part of what platforms do, every day, and it’s critical to the safe functioning of the internet and of social platforms.
But perhaps the big takeaway here is that, as we continue to discuss where enforcement and censorship should take place, we should fall back on the underlying belief that transparency, accountability and due process (and not arbitrary decisions by powerful companies or outside groups) are critical components of any solution.
I love his post actually.
I think he has that right. I also think that it is a dangerous power. And that there are no rules to govern this.
I also know I’ve had enough. Yesterday on avc the trolls were out in full force and, after some personal attacks on me because I wouldn’t engage, I said Fuck it and used the Disqus block mechanism and eliminated 4 of them, and have no interest nor intent of ever allowing them back in my life.
Am I creating my own echo chamber? Sure.
Have I had it with asses? Yes.
Action is good as long as we are aware of the nuance of it.
actually your approach is the best one, if/when it works — giving users the ability to tune what they see or don’t see, as opposed to necessarily making centralized judgment calls
we had a similar discussion about these choices back when I was building the first corporate blogs for companies.
the question ‘what do we do with people who have criticisms?’ was the discussion, over and over again.
now the discussion is what do we do when civility is out the door. and this is more and more relevant, as every single brand needs to take a stand on this today, whether it be abortion, climate, or women’s rights, and each will bring out the walking dead and their foul sputter.
Was it obvious? Wasn’t it Matt who said before something like ‘words are not a bomb’?
As far as the statement being intentionally provocative, I don’t think that is the case (and I doubt it); if anything, it’s the opposite. You definitely don’t want people doing things out of emotion or lack of rationality, and you certainly don’t want (and Matt has discussed this as well) people making decisions by doing what the crowd that yells the loudest demands. Being an asshole is definitely not a reason to ban someone.
I say this without ever having looked at the Daily Stormer, and as someone with both a father who lived through the camps and grandparents, aunts, and uncles who died in the camps, similar to your family. But the way I understand it, what they are doing is protected speech. Would a movie or a play using the same words be considered dangerous in any way? A video game?
Also, and this is important, Cloudflare is not critical to keeping the site up, despite the way this situation has been presented. They offer a product that protects and is helpful to this particular site and others, but it is not used or even needed by what I would call ‘most sites’. And their action is vastly different from Google Domains or any registrar taking the site offline, or de-indexing it, or whatever. The domain was just transferred into Google, and as such, per ICANN rules, it can’t be transferred out for another 60 days. So it’s dead right there. There is no practical way around that other than using another domain, which, as you know, is actually super easy. (Plus it would be trivial to inform the world of the new site, right?)
Lastly, the site was (as I pointed out on AVC yesterday) already on clientHold from the day before because of Google Domains’ action. So Cloudflare didn’t have to do anything. They could have simply said ‘the site is on clientHold and it is not working’ and turfed this. There would be a small number of people who could still access it as a result of caching, but that is of little significance. Words on their part would get around that easily.
Domain Name: DAILYSTORMER.COM
Registry Domain ID: 1787753602_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.google.com
Registrar URL: http://domains.google.com
Updated Date: 2017-08-15T00:30:23Z
Creation Date: 2013-03-20T22:43:18Z
Registry Expiry Date: 2020-03-20T22:43:18Z
Registrar: Google Inc.
Registrar IANA ID: 895
Registrar Abuse Contact Email: registrar-abuse@google.com
Registrar Abuse Contact Phone: +1.8772376466
Domain Status: clientHold https://icann.org/epp#clientHold
Name Server: JEAN.NS.CLOUDFLARE.COM
Name Server: KIRK.NS.CLOUDFLARE.COM
DNSSEC: unsigned
URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2017-08-17T17:50:55Z <<<
(See updated date/status above.)
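If you want to verify that status yourself, here is a minimal sketch (assuming the standard Unix `whois` command-line tool is installed; the helper name is just for illustration, and parsing details vary by registrar) that pulls the EPP status codes out of a WHOIS response and checks for clientHold:

```python
# Sketch: extract the EPP "Domain Status" codes from WHOIS output and check
# for clientHold, which suspends a domain at the registrar/registry level so it
# will not resolve regardless of who provides its DNS.
# Assumes the standard Unix `whois` CLI is installed and on the PATH.
import subprocess

def epp_statuses(domain):
    """Return the EPP status codes reported in the WHOIS output for a domain."""
    raw = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    return [
        # e.g. "Domain Status: clientHold https://icann.org/epp#clientHold" -> "clientHold"
        line.split(":", 1)[1].strip().split()[0]
        for line in raw.splitlines()
        if line.strip().lower().startswith("domain status:")
    ]

if __name__ == "__main__":
    statuses = epp_statuses("dailystormer.com")
    print(statuses)
    print("On clientHold:", any(s.lower() == "clienthold" for s in statuses))
```

Run at the time, that would have reported clientHold already set, which is the point: the registrar had already pulled the domain out of the zone before Cloudflare acted.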
So what we have is an emotional decision that ended up potentially creating more of a future problem, and a precedent, for Cloudflare. And that is at least one reason why you don't want to make decisions when you are under emotional stress and pressure.
What would I have done? I would have taken it down. But my T&C clearly state we can do whatever we want, and if you don't like that, go use a competitor.
yes, those are all arguments that cloudflare laid out in both posts — they don’t want to be making judgment calls, and the site won’t disappear if they stop protecting it (though in practice that isn’t true, as it has now been DDoS’d offline).
as for being intentionally provocative, what I was referring to was matthew’s own admission that what happened here was a dangerous / bad precedent, and that we shouldn’t want people like him to have the power to make those kinds of decisions
my understanding of the real tipping point is that the daily stormer people were claiming that cloudflare was an ally to their cause, and that that was why it had not dropped them yet
I have a lot to say about this based on my 35 years of observing what companies do and what ends up happening. As well as obviously personal experience.
That said, it’s clear that any company of this size and, more importantly, ‘prominence’ needs some kind of a crisis plan ‘crash cart’ that deals with any out-of-band situation that happens. And that plan has to include people who are cool and unemotional about the events and can think rationally and not do something that they might later regret. Which means it can’t be weighted down with investors or company employees or other people who are wrapped up in the day-to-day. You know, the person who helps you when you have a heart attack is not your father or your child; it’s someone who can think clearly and execute and not over-react.
Do I know how people do the wrong thing when emotional? Of course I do and it happens literally every day when I act as an intermediary and it’s one of the ways I manipulate the outcome. And prevent people from making mistakes as well.
“Do I know how people do the wrong thing when emotional? Of course I do and it happens literally every day when I act as an intermediary and it’s one of the ways I manipulate the outcome.”
too funny
A lot of great points here. Understanding the different roles played by different types of provider is crucial. In my view the distinction that matters is between companies that play an active (even if ‘neutral’ and automated) role in selecting content that is displayed to users, and dumb pipes. Social media companies and search engines can’t help but play a role in managing the huge volume of content presented to users. That’s their business. Once they’re in that game, they have realized they need robust policies that recognize the wider social consequences of their decisions. On the other hand, a firm like Dropbox, say – I don’t know enough about Cloudflare to comment – can claim to be genuinely neutral, a simple conduit. Facebook and Google can’t.
From there it gets tricky. There’s no ‘protected speech’ on Facebook – the 1st Amendment only refers to our right not to have our speech infringed by the state. Facebook can make up whatever rules it likes, and does. But as you say, the social and ethical implications of those rules are too great to be left to private companies alone. Equally we hardly want the courts crawling all over social media companies’ terms of use.
In my view we need a different kind of regulatory model for these companies, one that ensures that companies with a crucial role in information dissemination have appropriate policies that recognize the benefit of openness and free speech as well as responsibility for acting on illegal and potentially harmful content. Those policies should be developed transparently, with expert external input and monitoring. That could be regulated for without making internet platforms liable for every piece of content they carry.
I think you are on the right track thinking through the distinction between more “active” platforms like social networks and marketplaces and more “passive” platforms like ISPs and networks — this is what matthew has been getting at in his posts
It gets tricky when you switch between what platforms are within their rights to do (community guidelines, content policies, etc.) and what they are legally required to do. A lot of the bad law around internet companies stems from the idea that platforms have a positive legal obligation to police and remove content that violates various rules, regulations, and laws. This is where it gets a lot more tricky, because once the platforms have legal liability around this, you open the doors to massive censorship, as platforms over-remove user-generated content in order to limit their liability
Thanks. Agreed, and the measure of success that bad law tries to achieve – that *all* instances of illegal content are removed, in relevant jurisdictions, instantly, or even before they’re posted – drives wrong and costly incentives. The right measure of success is that platforms take reasonable steps to act on illegal content in a timely fashion. But the definition and monitoring of ‘reasonable’ and ‘timely’ shouldn’t be a matter for the platforms themselves, because it gives them a power I don’t believe they want or should have, and also because no such steps, no matter how carefully considered and implemented, will be enough to please everybody.
yeah, that’s right
and in general this is the approach (someone notifies a platform of something and they respond)
but you are right that the way that response / back and forth happens really matters. i wonder what standards can be created there. in some cases (such as emergent child issues or users in crisis/self harm situations) time is really of the essence
great q. Having legislators stand up and proclaim outrage that unpalatable (or illegal) content is available is a pretty poor way of creating standards, that much we know, although it’s often the approach that gets the most traction.