Subscribe to The Slow Hunch by Nick Grossman
Investing @ USV. Student of cities and the internet.
With real-time, interconnected, self-executing systems, when things go wrong, they sometimes go really wrong. I wrote about this general idea previously here.
Yesterday, while I was writing my post on Trusted Brands, I did a little searching through my blog archives so that I could link back to all the posts categorized under "Trust". Along the way, I re-categorized some older posts that belonged in that category but weren't appropriately marked. In doing so, I came across a whole bunch of posts from 2013 that I had imported from my old Tumblr blog, but that were still saved as drafts rather than published posts.
So, I did a little test with one of them -- hit Publish and checked that it looked right. Then I did a bulk edit on about 15 posts, selecting all of them and changing their status from "draft" to "published".
This did not have the intended effect.
Rather than those posts showing up in the archives under 2013, they were published as of yesterday. So now I have 15 posts from 2013 showing up at the top of the blog as if I wrote them yesterday.
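In hindsight, the fix is to pin each post's original date explicitly when flipping its status, since a bulk publish otherwise stamps everything with the current time. A minimal sketch, assuming a WordPress-style REST update payload (the field names here are illustrative, not a tested integration):

```python
def publish_payload(post):
    """Build an update payload that flips a draft to 'publish' while
    pinning the post's original date, so it files under its old year
    instead of jumping to the top of the blog.

    `post` is a dict with 'id' and 'original_date' (ISO 8601 string).
    """
    return {
        "id": post["id"],
        "status": "publish",
        # Setting `date` explicitly prevents the CMS from stamping
        # the post with the current time at publish.
        "date": post["original_date"],
    }

drafts = [
    {"id": 101, "original_date": "2013-04-02T09:30:00"},
    {"id": 102, "original_date": "2013-07-15T14:00:00"},
]
payloads = [publish_payload(p) for p in drafts]
```

Looping over the drafts this way keeps each one anchored to its 2013 date, which is exactly what the bulk-edit UI didn't do.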
That would not have been a real problem on its own. The real problem was that, because of our automated "content system" (which I built, mind you) within the USV team, those posts didn't just show up on my blog: they showed up on the USV Team Posts widget (displayed here, on Fred's blog, and on Albert's blog); they showed up (via the widget) in Fred's RSS feed, which feeds his daily newsletter; and blast notifications were sent out via the USV network Slack. Further, some elements of the system (namely, the consolidated USV team RSS feed, which is powered by Zapier) are not easily changeable.
Because of the way this happens to be set up, all of those triggers fire automatically and in real time. As Jamie Wilkinson remarked to me this morning, it is unclear whether this is a feature or a bug.
Of course, as all of this happened, I was on a plane to SF with spotty internet, and was left trying to undo the mess, restore things to a previous point, monkey patch where needed, etc.
Point is: real-time automation is really nice, when it works as intended. Every day for the past few years, posts have been flowing through this same system, and it's been great. No fuss, no muss, everything flowing where it should.
But as this (admittedly very minor) incident shows, real-time, automatic, interconnected systems carry a certain type of failure risk. In this particular case, there are a few common sense safeguards we could build in to protect against something like this (namely: a delay in the consolidated RSS feed in picking up posts, and/or an easy way to edit it post-hoc) -- maybe we will get to those.
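That first safeguard -- delaying the consolidated feed -- amounts to a simple buffer: only pass along posts old enough that a human has had a chance to catch a mistake before the newsletter and Slack blasts pick them up. A rough sketch of the idea (the entry shape and one-hour window are just assumptions, not how our Zapier setup actually works):

```python
from datetime import datetime, timedelta

def delay_filter(entries, now, delay_minutes=60):
    """Return only feed entries that have 'settled' -- i.e., were
    published at least `delay_minutes` ago -- so downstream systems
    (newsletter, Slack notifications) don't pick up fresh mistakes."""
    cutoff = now - timedelta(minutes=delay_minutes)
    return [e for e in entries if e["published"] <= cutoff]

now = datetime(2018, 2, 1, 12, 0)
entries = [
    {"title": "old post", "published": datetime(2018, 2, 1, 9, 0)},
    {"title": "accidental bulk publish", "published": datetime(2018, 2, 1, 11, 55)},
]
safe = delay_filter(entries, now)  # the accidental post is held back
```

The tradeoff is obvious: you give up some real-time-ness in exchange for an undo window.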
But I also think about this in the world of crypto/blockchain and smart contracts, where a key feature of the code is that it is automatic and "unstoppable". We have already seen some high-profile cases where unstoppable code-as-law can lead to some tough situations (DAO hack, ETH/ETC hard fork, etc), and will surely see more.
There is a lot of power and value in automated, unstoppable, autonomous code. But it does absolutely bring with it a new kind of risk, and will require both a new programming approach (less iterative, more careful) and also new tools for governance, security, etc (along the lines of what the teams at Zeppelin, Aragon and Decred are building).
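One concrete shape those governance tools take is the "circuit breaker" or pausable pattern that Zeppelin popularized for smart contracts: automated actions run freely until a guardian pauses the system. The real thing is Solidity; this Python toy just illustrates the pattern:

```python
class PausableExecutor:
    """Toy illustration of the circuit-breaker pattern: automation
    proceeds normally until a guardian flips the pause switch, at
    which point all actions are blocked until unpaused."""

    def __init__(self):
        self.paused = False

    def pause(self):
        self.paused = True

    def unpause(self):
        self.paused = False

    def execute(self, action):
        if self.paused:
            raise RuntimeError("system paused -- action blocked")
        return action()

ex = PausableExecutor()
result = ex.execute(lambda: "posted")  # runs normally

ex.pause()
try:
    ex.execute(lambda: "posted again")
    blocked = False
except RuntimeError:
    blocked = True  # the pause stopped the action
```

It's a small concession of "unstoppability" in exchange for a way to stop the bleeding when the automation misfires.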