Earlier this week, I was at SXSW for CTA's annual Innovation Policy Day. My session, on Labor and the Gig/Sharing Economy, was a lively discussion including Sarah Leberstein from the National Employment Law Project, Michael Hayes from CTA's policy group (which reps companies from their membership including Uber and Handy), and Arun Sundararajan from NYU, who recently wrote a book on the Sharing Economy. But that's not the point of this post! The point of this post is to discuss an idea that came up in a subsequent session, on Security & Privacy and the Internet of Things. The idea that struck me the most from that session was the tension -- or, depending on how you look at it, codependence -- between the "freedom to innovate" and the "freedom to investigate".
The gist of permissionless innovation is that we -- as a society and as a market -- need the ability to experiment: to try new things freely, make mistakes, take risks, and -- most importantly -- learn from the entire process. Therefore, as a general rule, policy should bias towards allowing experimentation rather than prescribing fixed rules. This is the foundation of what I call "regulation 2.0".
One of the speakers in that session was Keith, from the Stanford CS department (who jokingly entered his twitter handle as @Stanford, which was reprinted in the conference materials and picked up in the IPD tweetstream). Keith has been exploring the idea of the "Freedom to Investigate" -- or, as he put it, "the right to eavesdrop on your things". In other words, if we are to trust the various devices and services we use, we must have a right to inspect them -- to "audit" what they are saying about us. In this case, specifically, that means a right to intercept and decrypt the messages sent between mobile / IoT devices and the web services behind them. Without this transparency, we as consumers and as a society have no way of holding service providers accountable, or of having a truly open market. The question I asked was:
are these two ideas in tension, or are they complementary?
Adam gave a good answer, which was essentially: they are complementary -- we want to innovate, and we also need this kind of transparency to make the market work. But... there are limits to the forms of transparency we can force on private companies -- lots of information we may want to audit is sensitive for various reasons, including competitive issues, trade secrets, etc. And Kevin seemed to agree with that general sentiment. On the internet (within platforms like eBay, Airbnb and Uber), this kind of trade is the bedrock of making the platforms work (and the basis of what I've written about before). Users are given the freedom to innovate (to sell, write, post, etc), and platforms hold them accountable by retaining the freedom to investigate. Everyone gladly makes this trade, understanding, at the heart of things, that without the freedom to investigate, we cannot achieve the level of trust necessary to grant the freedom to innovate!

So that leaves the question: how can we achieve the benefits of both the freedom to experiment and the freedom to investigate (and, as a result, hold actors accountable and make market decisions)? Realistically speaking, we can't have the freedom to innovate without some form of the freedom to investigate. The tricky bit comes when we try to implement that in practice. How do we design such systems? What is the lightest-weight, least heavy-handed approach? Where can this be experimented with using technology and the market, rather than through a legal or policy lever? These are the questions.
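As a concrete (and hedged) illustration of what the "freedom to investigate" could look like with today's tooling: below is a minimal sketch of an addon for the open-source mitmproxy tool, which lets you sit between your own devices and the cloud and see what they're saying about you. The `DeviceAuditor` name and the logging choices are my own invention for illustration, not anything proposed at the session.

```python
# A minimal sketch of "eavesdropping on your things": log every request
# your devices make to the web services behind them.
# Run with: mitmdump -s device_auditor.py
# (Devices must route through the proxy and trust its CA certificate --
# which is exactly the kind of access a "freedom to investigate" implies.)
from mitmproxy import http


class DeviceAuditor:
    def request(self, flow: http.HTTPFlow) -> None:
        # For each outbound request: which service, which endpoint,
        # and how much data is being sent.
        req = flow.request
        print(f"[device -> {req.host}] {req.method} {req.path} "
              f"({len(req.content or b'')} bytes)")


addons = [DeviceAuditor()]
```

Even a crude audit like this makes the invisible visible -- and that visibility is what makes accountability, and a truly open market, possible.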
[1] Close readers / critics will observe an apparent tension between a "regulation 2.0" approach and policies such as Net Neutrality, which I have supported. Happy to address this in more depth, but long story short: Net Neutrality, like many other questions of regulation and rights, is a question of whose freedom we are talking about -- in this case, the freedom of telcos to operate their networks as they please, or the freedom of app developers, content providers and users to deliver and choose from the widest variety of services and programming. The net neutrality debate is about which of those freedoms to prioritize, and I side with app developers, content providers and users, and the broad & awesome innovation that such a choice results in.
It's been a fascinating week to watch the war between Uber and the De Blasio administration play out. Not surprisingly, Uber ended up carrying the day using a combination of its dedicated user base and its sophisticated political machine. This is yet another very early round in what will be a long and hard war -- not just between Uber and NYC, or Uber and other cities, but between every high-growth startup innovating in a regulated sector and every regulator and lawmaker overseeing those sectors.

Watching the big battles that have played out so far -- in particular around Uber and Airbnb -- we've seen the same pattern several times over: a new startup delivers a creative and delightful new service which breaks the old rules, ignoring those rules until it has a critical mass of happy customers; regulators and incumbents respond by trying to shut down the new innovation; startups and their happy users rain hellfire on the regulators; questions arise about the actual impact of the new innovation; a tiny amount of data is shared to settle the dispute. Rinse and repeat, over and over.

I am not sure there's a near-term alternative to this process -- new ways of doing things will never see the light of day if step 1 is always "ask permission". The answer will nearly always be no, and new ideas won't have a chance to prove themselves.

Luckily, though, we have somewhat of a model to follow for a better future. It's the way that these new platforms are regulating themselves. My colleague Brad has long said that web platforms are like governments, and that's becoming clearer by the day (just look at Reddit for the latest chapter). The primary innovation that modern web platforms have created is, essentially, how to regulate, adaptively, at scale. Using tons and tons of real-time data as their primary tool, they've inverted the regulatory model. Rather than seeking onerous up-front permission, users onboard easily, but are then held to strict accountability through the data about their actions.
Contrast this with the traditional regulatory model -- the one governments use to regulate the private sector -- and it's the opposite: regulations focus on up-front permission as the primary tool.
The reason for this makes lots of sense: when today's regulations were designed (largely at the beginning of the progressive era in the early 20th century), we didn't have access to real-time data. So the only feasible approach was to build high barriers to entry. Today, things are different. We have data, lots of it. In the case of the relationship between web platforms (companies) and their users, we are leveraging that data to introduce a regulatory regime of data-driven accountability. Just ask any Uber driver what their chief complaint is, and you'll likely hear that they can get booted off the platform for poor performance, very quickly.

Now, the question is: how can we transform our public regulations to adopt this kind of model? Here's the part that no one will like:

1) Regulators need to accept a new model where they focus less on making it hard for people to get started. That means things like relaxing licensing requirements (for example, all the states working on Bitcoin licensing right now) and increasing the freedom to operate. This is critical for experimentation and innovation.

2) In exchange for that freedom to operate, companies will need to share data with regulators -- un-massaged, and in real time, just like their users do with them. AND they will need to accept that that data may result in new forms of accountability. For example, we should give ourselves the opportunity to enjoy the obvious benefits of the Ubers and Airbnbs of the world, but also recognize that Uber could be making NYC traffic worse, and Airbnb could be making SF housing affordability worse.

In other words: grant companies the freedoms they grant their users, but also bring the same data-driven accountability.
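To make that trade concrete, here is a minimal sketch of what "sharing data with regulators in real time" might look like for a ride platform: push a small, privacy-preserving event for every trip to a city-run feed, instead of asking for permission up front. The endpoint, schema, and field names are all hypothetical -- invented for illustration, not drawn from any actual proposal.

```python
# Hypothetical sketch: a platform pushes un-massaged trip events to a city
# regulator in real time, in exchange for the freedom to operate.
# The endpoint and schema below are invented for illustration only.
import json
import time
import urllib.request

REGULATOR_FEED = "https://data.example-city.gov/v1/trip-events"  # hypothetical


def report_trip(trip_id: str, pickup_zone: str, dropoff_zone: str,
                duration_s: int) -> None:
    event = {
        "trip_id": trip_id,          # opaque id, no rider identity
        "pickup_zone": pickup_zone,  # coarse zones, not exact GPS
        "dropoff_zone": dropoff_zone,
        "duration_s": duration_s,
        "reported_at": int(time.time()),
    }
    req = urllib.request.Request(
        REGULATOR_FEED,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget, for the sketch
```

With a feed like this, the city could measure congestion or housing effects from the data itself, rather than rationing entry with up-front permits.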
That is going to be a tough pill to swallow, on both sides, so I'm not sure how we get there. But I believe that if we're honest with ourselves, we will recognize that the approach to regulation that web platforms have brought to their users is an innovation in its own right, and one that we should aim to apply to the public layer. Over at TechCrunch, Kim-Mai Cutler has been exploring this idea in depth. In her article today, she rightly points out that "Those decisions are tough if no one trusts each other" -- platforms (rightly) don't trust regulators not to instinctively clamp down on new innovations, and regulators don't trust platforms to EITHER play by the existing rules OR provide in-depth data for the sake of accountability. In the meantime, we'll get to observe more battles as the war rages on.
Today at USV, we are hosting our 4th semiannual Trust, Safety and Security Summit. Brittany, who manages the USV portfolio network, runs about 60 events per year -- each one a peer-driven, peer-learning experience, like a mini-unconference on topics like engineering, people, design, etc. The USV network is really incredible and the summits are a big part of it. I always attend the Trust, Safety and Security summits as part of my policy-focused work.

Pretty much every network we are investors in has a "trust and safety" team which deals with issues ranging from content policies (spam, harassment, etc) to physical safety (on networks with a real-world component), to dealing with law enforcement. We also include security here (data security, physical security) -- often managed by a different team, but with many issues overlapping with T&S.

What's amazing to witness when working with Trust, Safety and Security teams is that they are rapidly innovating on policy. We've long described web services as akin to governments, and it's in this area that this is most apparent. Each community is developing its own practices and norms and rapidly iterating on the design of its policies based on lots and lots and lots of real-time data. What's notable is that across the wide variety of platforms (from messaging apps like Kik, to marketplaces like Etsy and Kickstarter, to real-world networks like Kitchensurfing and Sidecar, to security services like Cloudflare and Sift Science), the common element in terms of policy is the ability to handle the onboarding of millions of new users per day thanks to data-driven, peer-produced policy devices -- which you could largely classify as "reputation systems". Note that this approach works for "centralized" networks like the ones listed above, as well as for decentralized systems (like email and bitcoin), and that governing in decentralized systems has its own set of challenges.

This is a fundamentally different regulatory model than what we have in the real world. On the internet, the model is "go ahead and do -- but we'll track it, and your reputation will be affected if you're a bad actor", whereas with real-world government, the model is more "get our permission first, then go do". I've described this before as "regulation 1.0" vs. "regulation 2.0".
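As a toy sketch of that "regulation 2.0" loop -- open onboarding up front, data-driven accountability after the fact -- consider something like the following. The class name, signals, and thresholds are all invented for illustration; real reputation systems are far richer.

```python
# Toy reputation system: the "regulation 2.0" loop in miniature.
# Onboarding is free; accountability comes from data, after the fact.
# Signals and thresholds below are invented for illustration.

class Reputation:
    def __init__(self):
        self.ratings = []  # 1-5 star ratings from peers
        self.flags = 0     # spam / abuse reports

    def record(self, rating):
        self.ratings.append(rating)

    def flag(self):
        self.flags += 1

    @property
    def score(self):
        # New users start with the benefit of the doubt.
        return sum(self.ratings) / len(self.ratings) if self.ratings else 5.0

    def in_good_standing(self):
        # Regulation 1.0 checks credentials before entry;
        # regulation 2.0 checks observed behavior, continuously.
        return self.score >= 4.0 and self.flags < 3


# Open on-ramp: anyone can join immediately...
driver = Reputation()
assert driver.in_good_standing()

# ...but the data holds them accountable after the fact.
for stars in (5, 2, 1, 2):
    driver.record(stars)
print(driver.in_good_standing())  # False: booted for poor performance
```

The point is the inversion: a new user is in good standing from day one (the open on-ramp), and loses access only when the data turns against them.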
I recently wrote a white paper for the Data-Smart City Solutions program at the Harvard Kennedy School on this topic, which I have neglected to blog about here so far. It's quite long, but the above is basically the TL;DR version. I mention it today because we continue to be faced with the challenge of applying regulation 1.0 models to a regulation 2.0 world. Here are two examples. First, the NYC Taxi and Limousine Commission's recently proposed rules for regulating on-demand ride applications. At least two aspects of the proposed rules are really problematic:
TLC wants to require its sign-off on any new on-demand ride apps, including all updates to existing apps.
TLC will limit any driver to having only one active device in their car.
On #1: apps ship updates nearly every day. Imagine adding a layer of regulatory approval to that step. And imagine that that approval needs to come from a government agency without deep expertise in application development. It's bad enough that developers need Apple's approval to ship iOS apps -- we simply cannot allow for this kind of friction when bringing products to market.

On #2: the last thing we want to do is introduce artificial scarcity into the system. The beauty of regulation 2.0 is that we can welcome new entrants, welcome innovations, and welcome competition. We don't need to impose barriers and limits. And we certainly don't want new regulations to entrench incumbents (whether that's the existing taxi/livery system or new incumbents like Uber).

Second, the NYS Dept of Financial Services this week released its final BitLicense, which will regulate bitcoin service providers. Coin Center has a detailed response to the BitLicense framework, which points out the following major flaws:
Anti money laundering requirements are improved but vague.
A requirement that new products be pre-approved by the NYDFS superintendent.
Custody or control of consumer funds is not defined in a way that takes full account of the technology’s capabilities.
Language which could prevent businesses from lawfully protecting customers from publicly revealing their transaction histories.
The lack of a defined onramp for startups.
Without getting into all the details, I'll note two big ones: DFS pre-approval for all app updates (same as with the TLC), and the "lack of a defined on-ramp for startups". This idea of an "on-ramp" is critical; it's the key thing that all the web platforms referenced at the top of this post get right, and it's the core idea behind regulation 2.0. Because we collect so much data in real time, we can vastly open up the "on-ramps", whether those are for new customers/users (in the case of web platforms) or for new startups (in the case of government regulations). The challenge here is that we ultimately need to decide to make a pretty profound trade: trading up-front, permission-based systems for open systems made accountable through data.
The challenge here is exacerbated by the fact that it will be resisted on both sides: governments will not want to relinquish the ability to grant permissions, and platforms will not want to relinquish data. So perhaps we will remain at a standoff, or perhaps we can find an opportunity to consciously make that trade -- dropping permission requirements in exchange for opening up more data. This is the core idea behind my Regulation 2.0 white paper, and I suspect we'll see the opportunity to do this play out again and again in the coming months and years.