For the past few weeks, I've been following the FBI / Apple phone unlocking case, and digging deep into the debate around encryption, security and privacy. This debate is as old as the sun, and the exact same arguments we're going through now were fought 20 years ago, during the first crypto wars and the US government's effort to deploy the Clipper Chip as a way of sharing crypto keys between industry and government. The stance of the tech industry has always been "strong crypto or else, because Math", and the stance of the government has been "come on guys, let's figure something out here".

At USV, we've been trying to look at this round of the fight with fresh eyes, to the extent possible. What we've been wondering is: is there something different this time around?[1] Has anything changed that might make us reconsider these dug-in, partisan-esque positions? Are there unintended consequences that the tech industry hasn't been considering? To paraphrase my colleagues' arguments:
one has argued that trust, safety and security are serious issues within and around web platforms, and that platform operators do have a civic duty to cooperate with law enforcement when it's necessary and lawful (on the surface this is not controversial -- it all depends on the whys and hows). Albert, meanwhile, has argued that we risk a closed society
where information and knowledge are locked up, rather than an open society that benefits from collective intelligence and open knowledge. The part I really want to dig into is an apparent parallel here between data security and DRM. With DRM, there's been a 30-year battle to lock down the entire software and hardware ecosystem in the name of controlling access to content. Internet / free culture advocates have long argued
that the more enlightened approach is to understand that information wants to be free, and that we can all be better off if we adapt our culture, expectations, and business models to a world where remixing is allowed. Now, as we look at data security and privacy, I feel a lot of those same forces coming to bear: in the name of data security and privacy, we all need to get on board with a controlled software / hardware model where companies, rather than users themselves, control data flows. This is best exemplified by Apple's security model, which stores encryption keys in a separate "Secure Enclave" and only allows software to be installed that's signed by Apple -- conforming not only to their security policies but to their control policies.
This, I think, is where some of us have gotten uncomfortable. What we don't want is for the cause of security and privacy to lead us down the path to locked-down devices and closed platforms, the way that DRM has. A risk here is that many of the folks who are fighting for copyright reform & device unlocking may be unwittingly undermining those same causes in the crypto/privacy/security fight. So what I've been trying to do is
parse apart the issues of security and control
. Can we have one without the other? Can we talk about them, and advocate for (or against) them, separately? (And, for bonus points, can we find ways to have both security and access to knowledge -- for example, as some secure data processing projects are attempting?)
Related to the control issue is the innovation bottleneck that comes from centralized app stores. For example: one of our portfolio companies recently realized that by shifting from an app-store model to an API-based model, they could increase their product iterations by 1000% -- shipping new code instantly, rather than waiting weeks for app store approval. This is the kind of innovation we want, and it's just not possible with the controlled app store model.
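As an illustration of the difference (a sketch of the general pattern, with a hypothetical endpoint and response fields -- not our portfolio company's actual system): in an API-based model, behavior lives on a server the company can redeploy at any time, so every client picks up changes instantly.

```python
# Sketch of the API-based pattern: the app binary stays thin, and behavior
# is fetched from a server the company controls and can redeploy instantly.
# The URL and response fields here are hypothetical.
import json
from urllib.request import urlopen

CONFIG_URL = "https://api.example.com/v1/client-config"  # hypothetical endpoint

def load_behavior() -> dict:
    # Each launch picks up whatever the server currently says -- a new
    # server-side deploy changes every client's behavior with no store review.
    with urlopen(CONFIG_URL) as resp:
        return json.load(resp)  # e.g. {"checkout_flow": "v2", ...}
```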
Openness is also important for other kinds of security -- specifically, the kind that comes from being able to inspect and audit what our devices are doing. That matters today, and will be increasingly important as more Internet of Things devices do more things with more data. If we move towards a world of DRM-style data lockdown, we'll have less knowledge of how products work and less control over our information. This has been a long post, so I'll just summarize by saying: I think it would do everyone good to keep looking at the encryption issue not simply through the lens of privacy and security, but also through the lens of openness and innovation, and to make sure that whatever policies and technologies we support coming out of this strike the best possible balance. --
[1] One of the best resources from the academic community on the subject is "Don't Panic: Making Progress on the 'Going Dark' Debate", a Berkman Center report pointing out the extent to which the "going dark" framing is misleading, since the overall surface area for digital surveillance has grown dramatically at the same time that strong encryption has made some data inaccessible.
Earlier this week, I was at SXSW for CTA's annual Innovation Policy Day. My session, on Labor and the Gig/Sharing Economy, was a lively discussion including Sarah Leberstein from the National Employment Law Project, Michael Hayes from CTA's policy group (which reps companies from their membership including Uber and Handy), and Arun Sundararajan from NYU, who recently wrote a book on the Sharing Economy. But that's not the point of this post!

The point of this post is to discuss an idea that came up in a subsequent session, on Security & Privacy and the Internet of Things. The idea that struck me the most from that session was the tension -- or, depending on how you look at it, codependence -- between the "freedom to innovate" and the "freedom to investigate".

Defending the Freedom to Innovate was the Mercatus Center's Adam Thierer. Adam is one of the most thoughtful folks looking at innovation from a libertarian perspective, and is the author of a book on the subject of permissionless innovation. The gist of permissionless innovation is that we -- as a society and as a market -- need the ability to experiment: to try new things freely, make mistakes, take risks, and -- most importantly -- learn from the entire process. Therefore, as a general rule, policy should bias towards allowing experimentation, rather than prescribing fixed rules. This is the foundation of what I call Regulation 2.0[1].

Repping the Freedom to Investigate was Keith Winstein from the Stanford CS department (who jokingly entered his twitter handle as @Stanford, which was reprinted in the conference materials and picked up in the IPD tweetstream). Keith has been exploring the idea of the "Freedom to Investigate" or, as he put it in this recent piece in Politico, "the right to eavesdrop on your things". In other words, if we are to trust the various devices and services we use, we must have a right to inspect them -- to "audit" what they are saying about us. In this case, specifically, that means a right to intercept and decrypt the messages sent between mobile / IoT devices and the web services behind them. Without this transparency, we as consumers and as a society have no way of holding service providers accountable, or of having a truly open market.

The question I asked was: are these two ideas in tension, or are they complementary? Adam gave a good answer, which was essentially: they are complementary -- we want to innovate, and we also need this kind of transparency to make the market work. But there are limits to the forms of transparency we can force on private companies -- lots of information we may want to audit is sensitive for various reasons, including competitive issues, trade secrets, etc. And Keith seemed to agree with that general sentiment.

On the internet (within platforms like eBay, Airbnb and Uber), this kind of trade is the bedrock of making the platforms work (and the basis of what I wrote about in Regulation, the Internet Way). Users are given the freedom to innovate (to sell, write, post, etc.), and platforms hold them accountable by retaining the freedom to investigate. Everyone gladly makes this trade, understanding, at the heart of things, that without the freedom to investigate, we cannot achieve the level of trust necessary to grant the freedom to innovate!

So that leaves the question: how can we achieve the benefits of both of these things that we need -- the freedom to experiment, and the freedom to investigate (and, as a result, hold actors accountable and make market decisions)?
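To make the "freedom to investigate" concrete before answering that: here is a minimal sketch of eavesdropping on your own things using mitmproxy, an open-source intercepting proxy. You'd route the device's traffic through the proxy and, where the device allows it, install the proxy's CA certificate; devices that pin their certificates will resist exactly this kind of inspection, which is part of what the policy debate is about.

```python
# device_audit.py -- log what an IoT device says to its backing web service.
# Run with: mitmdump -s device_audit.py
# then point the device's HTTP(S) traffic through the proxy.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Called by mitmproxy for each intercepted request the device makes.
    print(f"{flow.request.method} {flow.request.pretty_url}")
    if flow.request.content:
        # Show the first bytes of whatever the device is reporting about us.
        print(f"  body[:200]: {flow.request.content[:200]!r}")
```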
Realistically speaking, we can't have the freedom to innovate without some form of the freedom to investigate. The tricky bit comes when we try to implement that in practice. How do we design such systems? What is the lightest-weight, least heavy-handed approach? Where can this be experimented with using technology and the market, rather than through a legal or policy lever? These are the questions.

[1] Close readers / critics will observe an apparent tension between a "Regulation 2.0" approach and policies such as Net Neutrality, which I also favor. Happy to address this in more depth, but long story short: Net Neutrality, like many other questions of regulations and rights, is a question of whose freedom we are talking about -- in this case, the freedom of telcos to operate their networks as they please, or the freedom of app developers, content providers and users to deliver and choose from the widest variety of services and programming. The net neutrality debate is about which of those freedoms to prioritize; I side with app developers, content providers and users, and the broad & awesome innovation that such a choice results in.
Since 2006, I've been writing here about cities, the internet, and the ongoing collision between the two. Along the way, I've also loved using Tumblr to clip quotes off the web, building on the idea of "the slow hunch" (the title of this blog) and the "open commonplace book" as a tool for tracking the slow hunch over time. Today, I'm launching the next iteration of both: Internet Meets World. On IMW, I'll be tracking the big questions raised by that collision.
I'll still continue to blog here, but will syndicate certain posts -- those specifically digging into the macro / legal / policy / societal issues created by the collision of the internet and the world -- on IMW. In addition to collecting my own posts, I'll also be collecting other articles from across the web, and will move my quote clipping from Tumblr into Medium. I'm also looking for one or more co-editors for IMW. If you're interested, shoot me an email at nick [at] usv [dot] com, including a handful of links / quotes that you think really capture the essence of this conflict / opportunity. Onward!