
I've been struck recently by the power and surprise of unintended consequences. For example, a recent Slate article digs into the flip side of the life-saving potential of automated vehicles: our reliance on car crash deaths for organ donors:
"An estimated 94 percent of motor-vehicle accidents involve some kind of a driver error. As the number of vehicles with human operators falls, so too will the preventable fatalities. In June, Christopher A. Hart, the chairman of the National Transportation Safety Board, said, “Driverless cars could save many if not most of the 32,000 lives that are lost every year on our streets and highways.” Even if self-driving cars only realize a fraction of their projected safety benefits, a decline in the number of available organs could begin as soon as the first wave of autonomous and semiautonomous vehicles hits the road—threatening to compound our nation’s already serious shortages." [#]
Or, with gene editing, what if we are successful at eradicating illness and preserving life forever? What new challenges will that present? How will we eat? How will we not consume all of Earth's natural resources? Or perhaps the life-saving potential will ultimately be canceled out by the life-harming potential -- it's clearly just as possible to use gene editing to weaponize mosquitoes as it is to sterilize them. Or, with the democratization of media -- on the one hand radically increasing freedom of expression, but on the other laying the foundation for the "fake news" problem. I don't think anyone who believed in the power of social networks to enable free speech and political organizing online really saw that coming, and it's a real, hard problem. Or, with artificial intelligence -- how do we avoid being blinded by the shiny newness of helpful automation while ignoring potential existential threats? Bill Gates on that subject:
"I am in the camp that is concerned about super intelligence," Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned." [#]
All of these consequences are made more serious by the fact that in a connected world, change can take place very, very quickly, and it can be hard, or impossible, to manage or roll back. A single person in a single place now has more power to impact the world (the whole world!) than ever before. As Kevin Esvelt, the geneticist who is the subject of the New Yorker article linked above, said: "as a single scientist, I can alter an organism in a laboratory that will have more of an effect on all your lives than anything the legislature across the river can do." [#] So what to do? These kinds of changes are coming (seemingly) faster than ever. I like Esvelt's suggestion that, in the case of gene editing, we should be building "undo" functionality into anything we deploy:
"With CRISPR and gene-drive technology, it might be possible for just one engineered mosquito, or fly, or any other animal or seed, to eventually change the fundamental genetics of an entire species. As Esvelt puts it, “A release anywhere could be a release everywhere.” Recognizing the possibility of an irreversible error, however, he and Church, in their earliest experiments, began to build drives capable of restoring any DNA that had been removed. Both say that if an edit cannot be corrected it should not be attempted. They also suggest retaining, in its original form, some part of any population that has been edited—a kind of molecular Noah’s Ark." [#]
That's one approach that seems reasonable and will hopefully be effective, at least in some cases. But for most of what we're doing there is no natural "undo" function, so we must think about other ways to manage, or at the very least, quantify and understand, the consequences of what we're making.
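To make the "undo" idea concrete in software terms, here is a minimal sketch (in Python, purely as an analogy; the class and method names are hypothetical, not from any real deployment tool) of the same rule Esvelt and Church apply to gene drives: a change only gets applied if its inverse is recorded alongside it, so the system can always be rolled back to its prior state.

```python
# A sketch of "undo" functionality as a deployment pattern: every change
# must come paired with a known inverse, and inverses are kept on a stack
# so the whole deployment can be unwound, most recent change first.

class ReversibleDeployment:
    def __init__(self):
        self._undo_stack = []  # inverses of applied changes, most recent last

    def apply(self, change, inverse):
        """Apply a change only if we also know how to reverse it."""
        if inverse is None:
            raise ValueError("refusing to apply a change with no known inverse")
        change()
        self._undo_stack.append(inverse)

    def rollback(self):
        """Undo every applied change, in reverse order."""
        while self._undo_stack:
            self._undo_stack.pop()()


if __name__ == "__main__":
    config = {"feature_enabled": False}
    deploy = ReversibleDeployment()

    # Record the prior value before changing it, so the inverse is exact.
    previous = config["feature_enabled"]
    deploy.apply(
        change=lambda: config.update(feature_enabled=True),
        inverse=lambda: config.update(feature_enabled=previous),
    )
    print(config)   # {'feature_enabled': True}

    deploy.rollback()
    print(config)   # {'feature_enabled': False}
```

The design choice doing the work is the refusal to apply a change whose inverse is unknown -- the software equivalent of "if an edit cannot be corrected it should not be attempted."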