The dangers of unstoppable code

With real-time, interconnected, self-executing systems, sometimes when things go wrong, they go really wrong. I wrote about this general idea previously here.

Yesterday, while I was writing my post on Trusted Brands, I was doing a little searching through my blog archives so as to link back to all the posts categorized under “Trust”.  In the process, I went back and re-categorized some older posts that fell under that category but weren’t appropriately marked.  Doing that, I came across a whole bunch of posts from 2013 that I had imported from my old Tumblr blog but that were still saved as drafts rather than published posts.

So, I did a little test with one of them — hit Publish and checked that it looked right.  Then I did a bulk-edit on about 15 posts, selecting all of them and changing their status from “draft” to “published”.

This did not have the intended effect.

Rather than those posts showing up in the archives under 2013, they were published as of yesterday.  So now I have 15 posts from 2013 showing up at the top of the blog as if I wrote them yesterday.  

That would not have been a real problem on its own — the real problem stemmed from the fact that, because of our automated “content system” (that I built, mind you) within the USV team, those posts didn’t just show up on my blog.  They showed up on the USV Team Posts widget (displayed here, on Fred’s blog and on Albert’s blog), they showed up (via the widget) in Fred’s RSS feed, which feeds his daily newsletter, and blast notifications were sent out via the USV network Slack.  Further, some elements of the system (namely, the consolidated USV team RSS feed, which is powered by Zapier) are not easily changeable.
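For context, here is a rough sketch of what this kind of real-time fan-out looks like in practice. This is not the actual USV setup; the feed URL, webhook URL, and polling interval are made up. The point is simply that every new feed item gets pushed downstream the moment it appears, with no delay and no sanity check.

```python
# Minimal sketch of a real-time RSS -> Slack fan-out (illustrative only).
import time
import feedparser   # pip install feedparser
import requests     # pip install requests

FEED_URL = "https://example.com/feed/"                  # placeholder feed
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder webhook

seen = set()

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        key = entry.get("id", entry.link)
        if key in seen:
            continue
        seen.add(key)
        # Fire the notification immediately: no grace period, no checks.
        requests.post(SLACK_WEBHOOK, json={"text": f"New post: {entry.title} {entry.link}"})
    time.sleep(60)  # poll every minute
```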

Because of the way this happens to be set up, all of those triggers fire automatically and in real time.  As Jamie Wilkinson remarked to me this morning, it is unclear whether this is a feature or a bug.

Of course, as all of this was happening, I was on a plane to SF with spotty internet, and was left trying to undo the mess, restore things to a previous point, monkey-patch where needed, etc.

Point is: real-time automation is really nice, when it works as intended.  Every day for the past few years, posts have been flowing through this same system, and it’s been great. No fuss, no muss, everything flowing where it should.

But as this (admittedly very minor) incident shows, real-time, automatic, interconnected systems carry a certain type of failure risk.  In this particular case, there are a few common-sense safeguards we could build in to protect against something like this (namely: a delay before the consolidated RSS feed picks up new posts, and/or an easy way to edit the feed after the fact) — maybe we will get to those.
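As a concrete illustration of the delay idea (my own sketch, not anything USV actually runs; the 30-minute grace period is an arbitrary value), a filter like this could sit between the consolidated feed and everything downstream, so an errant bulk publish can be reverted before it ever reaches the widget, the newsletter, or Slack:

```python
# Sketch of a "syndication delay" safeguard (illustrative values only).
from datetime import datetime, timedelta, timezone
from typing import Optional

SYNDICATION_DELAY = timedelta(minutes=30)  # grace period to catch mistakes

def ready_to_syndicate(published_at: datetime, now: Optional[datetime] = None) -> bool:
    """Only let a feed item through once it has been public for the full
    grace period; items that are too fresh get re-checked on the next pass."""
    now = now or datetime.now(timezone.utc)
    return (now - published_at) >= SYNDICATION_DELAY

# Usage: call this on each new feed item before posting to Slack or the
# newsletter feed; anything published in the last 30 minutes is held back.
```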

But I also think about this in the world of crypto/blockchain and smart contracts, where a key feature of the code is that it is automatic and “unstoppable”.  We have already seen some high-profile cases where unstoppable code-as-law can lead to some tough situations (DAO hack, ETH/ETC hard fork, etc), and will surely see more.

There is a lot of power and value in automated, unstoppable, autonomous code. But it does absolutely bring with it a new kind of risk, and will require both a new programming approach (less iterative, more careful) and also new tools for governance, security, etc (along the lines of what the teams at Zeppelin, Aragon and Decred are building).

8 comments on “The dangers of unstoppable code”

Standard stuff with software: since it will do what you request, you have to be careful about what you request.

For the issues of time/date stamps and undoing work, sure, it happens that that is just what I was working on last night for system administration of the server for my startup. E.g., at least some versions of Microsoft’s XCOPY copy files with their existing time/date stamps, but stamp the copied directories with the time/date of the XCOPY run. There is a fix with Robocopy. For old XCOPY results, there are two functions in Rexx that will help me correct the results. For this, you need to know a bit about how the Microsoft NTFS file system handles inheritance of time/date stamps.
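For what it’s worth, here is a sketch of the kind of post-hoc correction being described, in Python rather than Rexx, and assuming the original source tree is still available to copy the directory time/date stamps from (the paths are placeholders):

```python
# Re-apply each source directory's time/date stamps to the matching
# directory in an XCOPY destination tree (sketch; paths are placeholders).
import os
import shutil

def fix_directory_timestamps(src_root: str, dst_root: str) -> None:
    """Walk the source tree and copy directory timestamps (modified/accessed
    times and mode bits) onto the corresponding destination directories.
    Note: this does not touch the NTFS creation time."""
    for dirpath, _dirnames, _filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = dst_root if rel == "." else os.path.join(dst_root, rel)
        if os.path.isdir(dst_dir):
            shutil.copystat(dirpath, dst_dir)

# Example (placeholder paths):
# fix_directory_timestamps(r"C:\source\data", r"D:\copied\data")
```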

Sure, you want good backups so that you can undo disasters caused by poorly designed and/or poorly documented tools.

‘be careful what you wish for’ has never been more germane.

The idea of autocode commanding autonomous robots is a truly scary prospect.

“Less code is the best code.”

Automated code will always be problematic. I cannot see it ever being stable. Every top engineer and professor I’ve talked to has told me that this is a core fundamental of engineering.

So I wonder, is automated code foolish or revolutionary?

Seems to me that there will have to be safeguards and escape hatches of some kind.

In some ways these safeguards also open up the potential for manipulation, which is a driving idea behind autonomous, auditable code in the crypto space.

Safeguards also could centralize things as well. Who makes the decision to implement or act upon an instance?

Centralization is something we must try to avoid…

But having said that, I’ve been spending the majority of my time focusing on UX in the crypto space, talking to other UX teams, and it appears as though some sort of centralization is coming in order for the ecosystem (web3 apps, dapps) to gain mass adoption.

I feel transparency is key for applications of the future. “…auditable code…”, clear opt-in, etc. will all help “users” make informed decisions about their data and actions on the web. Now that the blindfold is off with data privacy, for most, we must take the opportunity to give users the tools they need to opt in or opt out of things, allowing for a wide spectrum of control over their data: from complete control with significant consequences if mishandled, to less control but better UX.

I think that “less iterative, more careful” is in fact an old programming approach. Web development brought the iterative and (almost) careless programmer mindset because we could see results instantly. A few years later we came to expect instant results from everything and to have APIs for everything. Very few people check what is going on behind the APIs and services we use.

Exhaustive, boring, good testing is the way to go. The problem is that it takes time and… it is boring. The good part is that most of it can be automated once it is built.

The real question is what happens when other, independent actors are running their own unstoppable code that takes the output of your unstoppable code and uses it as input. Which is exactly what much Ethereum and other “smart contract” platform code is designed to do.

It’s like something lifted straight out of a Kurt Vonnegut novel: the world ends in spectacular nuclear fireball b/c the GM of USV accidentally publishes a blog post with the wrong timestamp, setting off a long chain of events in which a nuclear technician reading AVC does a double-take while eating his sandwich in the operations center, spilling ketchup on the controls, creating a short-circuit that launches an ICBM towards Moscow and setting off a doomsday scenario.

On the bright side, it does show that despite @jasonpwright’s comment, perhaps there was a time when unstoppable code could have been even scarier!
