Today at USV, we are hosting our 4th semiannual Trust, Safety and Security Summit. Brittany, who manages the USV portfolio network, runs about 60 events per year -- each one a peer-driven, peer-learning experience, like a mini-unconference on topics like engineering, people, design, etc. The USV network is really incredible and the summits are a big part of it. I always attend the Trust, Safety and Security summits as part of my policy-focused work. Pretty much every network we are investors in has a "trust and safety" team, which deals with issues ranging from content policies (spam, harassment, etc) to physical safety (on networks with a real-world component) to dealing with law enforcement. We also include security here (data security, physical security) -- often managed by a different team, but with many issues that overlap with T&S. What's amazing to witness when working with Trust, Safety and Security teams is that they are rapidly innovating on policy. We've long described
, and it's within this area where this is most apparent. Each community is developing its own practices and norms and rapidly iterating on the design of its policies based on lots and lots and lots of real-time data. What's notable is that across the wide variety in platforms (from messaging apps like
), the common element in terms of policy is the ability to handle the onboarding of millions of new users per day thanks to data-driven, peer-produced policy mechanisms -- which you could largely classify as "reputation systems". Note that this approach works for "centralized" networks like the ones listed above, as well as for decentralized systems (like email and bitcoin) and that governing in decentralized systems
. This is a fundamentally different regulatory model than what we have in the real world. On the internet, the model is "go ahead and do it -- but we'll track it and your reputation will be affected if you're a bad actor", whereas with real-world government, the model is more "get our permission first, then go do". I've described this before as "regulation 1.0" vs. "regulation 2.0":
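To make the regulation 2.0 model concrete, here is a deliberately minimal sketch of how a data-driven reputation system gates bad actors after the fact rather than requiring permission up front. (This is a toy illustration -- the class name, scoring weights, and threshold are all made up, not any platform's actual system.)

```python
from collections import defaultdict

class ReputationSystem:
    """Toy 'act first, be accountable through data' model: anyone can join
    and act immediately; standing is computed from peer feedback signals."""

    def __init__(self, suspend_below=-3):
        self.scores = defaultdict(int)   # user -> running reputation score
        self.suspend_below = suspend_below

    def record_feedback(self, user, positive):
        # Every interaction produces a data point; bad acts weighted heavier.
        self.scores[user] += 1 if positive else -2

    def can_act(self, user):
        # No up-front permission: a brand-new user (score 0) can act.
        # Only an accumulated track record of bad behavior gets you gated.
        return self.scores[user] > self.suspend_below

rs = ReputationSystem()
print(rs.can_act("new_user"))   # open on-ramp: new users act immediately
rs.record_feedback("spammer", positive=False)
rs.record_feedback("spammer", positive=False)
print(rs.can_act("spammer"))    # accountability via data: repeat offender gated
```

The contrast with regulation 1.0 is in `can_act`: the default answer is yes, and only the observed data changes it.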
I recently wrote a white paper for the Data-Smart City Solutions program at the Harvard Kennedy School on this topic, which I have neglected to blog about here so far. It's quite long, but the above is basically the TL;DR version. I mention it today because we continue to be faced with the challenge of applying regulation 1.0 models to a regulation 2.0 world. Here are two examples: First, the NYC Taxi and Limousine Commission's recently proposed rules for regulating on-demand ride applications. At least two aspects of the proposed rules are really problematic:
TLC wants to require their sign off on any new on-demand ride apps, including all updates to existing apps.
TLC will limit any driver to having only one active device in their car.
On #1: apps ship updates nearly every day. Imagine adding a layer of regulatory approval to that step. And imagine that that approval needs to come from a government agency without deep expertise in application development. It's bad enough that developers need Apple's approval to ship iOS apps -- we simply cannot allow for this kind of friction when bringing products to market. On #2: the last thing we want to do is introduce artificial scarcity into the system. The beauty of regulation 2.0 is that we can welcome new entrants, welcome innovations, and welcome competition. We don't need to impose barriers and limits. And we certainly don't want new regulations to entrench incumbents (whether that's the existing taxi/livery system or new incumbents like Uber). Second, the NYS Dept of Financial Services this week released their final BitLicense, which will regulate bitcoin service providers. Coin Center has a detailed response to the BitLicense framework, which points out the following major flaws:
Anti money laundering requirements are improved but vague.
A requirement that new products be pre-approved by the NYDFS superintendent.
Custody or control of consumer funds is not defined in a way that takes full account of the technology’s capabilities.
Language which could prevent businesses from lawfully protecting customers from publicly revealing their transaction histories.
The lack of a defined onramp for startups.
Without getting into all the details, I'll note two big ones: DFS preapproval for all app updates (same as with TLC) and the "lack of a defined on-ramp for startups". This idea of an "on-ramp" is critical; it is the key thing that all the web platforms referenced at the top of this post get right, and it is the core idea behind regulation 2.0. Because we collect so much data in real time, we can vastly open up the "on-ramps", whether those are for new customers/users (in the case of web platforms) or for new startups (in the case of government regulations). The challenge here is that we ultimately need to decide to make a pretty profound trade: trading up-front, permission-based systems for open systems made accountable through data.
The challenge here is exacerbated by the fact that it will be resisted on both sides: governments will not want to relinquish the ability to grant permissions, and platforms will not want to relinquish data. So perhaps we will remain at a standoff, or perhaps we can find an opportunity to consciously make that trade -- dropping permission requirements in exchange for opening up more data. This is the core idea behind my Regulation 2.0 white paper, and I suspect we'll see the opportunity to do this play out again and again in the coming months and years.
I've spent the better part of the last six years thinking about where web standards come from. Before joining USV, I was at the (now retired) urban tech incubator OpenPlans, where, among other things, we worked to further "open" technology solutions, including open data formats and web protocols. The two biggest standards we worked on were GTFS, the now ubiquitous format for transit data, including routes, schedules and real-time data for buses and trains; and Open311, an open protocol for reporting problems to cities (broken streetlights, potholes, etc) and asking questions (how do I dispose of paint cans?). Each has its own origin story, which I'll get into a little bit below. Last week, I wrote about "venture capital vs. community capital" (i.e., the "cycle of domination and disruption") -- and really the point of that talk was the relationship between proprietary platforms and open protocols. My point in that post was that this tension is nothing new; in fact it is a regular part of the continuous cycle of the bundling and unbundling of technologies, dating back, well, forever. Given the emergence of bitcoin and the blockchain as an application platform, it feels like we are in the midst of another wave of energy and effort around the development and deployment of web standards. So we are seeing a ton of new open standards and protocols being imagined, proposed, and developed. The key question to be asking at this moment is not "what is the perfect open standard", but rather, "how do these things come to be, anyway?" Joi Ito talks about the Internet as "a belief system" as much as a technology, and part of how I interpret that is the fact that it rests on the idea of everyone just agreeing to do things kind of the same way. So, we don't all need to run the same computers, use the same ISP, or be members of a common club (social network) -- rather, all we need to do is adhere to some common protocols (HTTP, SMTP, etc). 
No one owns the protocols (by and large) -- they are more like "customs" than anything else. It works because we all agree to do more or less the same thing. So when we're looking at all these new protocols appearing (from openname, to ethereum, to whatever), the question is not just "is this a good idea" but rather "how might everyone agree to do this?". It's a political and social problem as much as a technical problem. And more often than not, there is some sort of "magic" involved that is the difference between "cool idea" or "nice whitepaper" and "everyone does it this way". Here is a crack at bucketing a few of the major strategies I've observed for bringing standards to market. (These are not necessarily mutually exclusive, and are certainly not complete -- would love to find other patterns and examples.)

Update: The Old Fashioned Way

Max Bulger makes a good point on Twitter that I have neglected to include here the traditional, formal methods of developing web standards -- through standards bodies like the W3C and the IETF. That's how many standards get made, but not all. For this post, I want to focus on hacks to that traditional process.

The Brute Force Approach

One way to bring a standard to market is to simply force it in, using your market position as leverage. Apple has been doing this for decades, most recently with USB-C, and two decades ago with the original USB.
Word on the street is that USB-C was less of a consensus-driven standards body project and more of an Apple hand-off. Time will tell, but now that USB-C is the port to beat all ports in the 12-inch MacBook, it could become the single standard for laptop and mobile/tablet ports. You can do this if you're huge (see also: Microsoft and .doc, Adobe and .pdf).

The Happy Magnet Approach

I mentioned the GTFS standard, which is now the primary way transit agencies publish route, schedule and real-time data. GTFS came to be through work between Google and Portland's TriMet back in 2005 -- a collaboration to get Portland's transit data into Google Maps -- so they created a lightweight standard as part of that. Then, Google used "hey, don't you want your data in Google Maps?" as the happy magnet, to draw other agencies (often VERY reluctantly) into publishing their data in GTFS as well. Here's a diagram I made back in 2010 to tell this story:
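Part of what made GTFS so easy to adopt is how simple the format is: a feed is just a zip of plain CSV files (stops.txt, routes.txt, trips.txt, stop_times.txt, and so on), each with a header row. A minimal sketch of reading one such file -- the sample rows here are made up for illustration, not from a real agency feed:

```python
import csv
import io

# A tiny, made-up stops.txt using GTFS's actual column names.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
1,Main St & 1st Ave,45.5231,-122.6765
2,Main St & 5th Ave,45.5239,-122.6802
"""

# GTFS files are plain CSV with a header row, so the stdlib is enough.
stops = list(csv.DictReader(io.StringIO(stops_txt)))
for stop in stops:
    print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])
```

That any agency could produce a feed with a spreadsheet export is arguably part of the go-to-market story: the magnet (Google Maps) pulled, and the low cost of compliance pushed.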
This approach includes elements of the Brute Force approach -- you need to have outsized leverage / distribution to pull this off. It's also worth noting that GTFS won the day (handily) vs. a number of similar formats that were being developed by the formal consortia of transit operators. I remember talking to folks at the time who had been working on these other standards, who were pissed that Google just swept in and helped bring GTFS to market. But that's exactly the point I want to make here: a path to market is often more important than a perfect design.

The Awesome Partner Approach

Not really knowing the whole story behind Creative Commons, it seems to me that one of the huge moments for that project was its partnership with Flickr to bring CC-licensed photos to market -- giving photographers the ability to tag with CC licenses, and giving users the ability to search by CC. CC was a small org, but it was able to partner with a large player to get reach and distribution.

The Make-them-an-offer-they-can't-refuse Approach

Blockchain hacker Matan Field recently described the two big innovations of bitcoin as 1) the ledger and 2) the incentive mechanism. The incentive mechanism is the key -- bitcoin and similar cryptoequity projects have a built-in incentive to participate. Give (compute cycles) and get (coins/tokens). While the Bitcoin whitepaper could have been "just another whitepaper" (future blog post needed on that -- aka the open standards graveyard), it had a powerful built-in incentive model that drew people in.

The Bottom-up Approach

At our team meeting on Monday, we got to discussing how OAuth came to be. (For those not familiar, OAuth is the standard protocol for allowing one app to perform actions for you in a different app -- e.g., "allow this app to post to Twitter for me", etc.)
According to the history on Wikipedia, OAuth started with the desire to delegate API access between Twitter and Magnolia, using OpenID, and from there a group of open web hackers took the project on -- first as an informal collaboration, then as a more organized discussion group, and finally as a formal proposal and working group at the IETF. From being around the folks working on this at the time, it felt like a very organic, bottom-up situation: less a theoretical top-down need and more a simple, practical solution to a point-to-point problem that grew into something bigger.

The Pretty Pretty Please Approach (aka the Herding Cats Approach)

This one is hard. Come up with a standard, and work really hard to get everyone to agree that it's a good idea and adopt it. It's not impossible to do this, but it's not easy. This is more or less the approach we took back in 2009-12 with Open311. In 2009, John Geraci from DIYcity (a civic hacking community at the time) wrote a letter to Mayor Bloomberg suggesting NYC take an open approach to its 311 system (I worked on the letter with John, as did several of my colleagues at the time). From there, Philip Ashlock from OpenPlans took the lead on turning it into a real thing -- working doggedly for 2 years with cities across the US, technology vendors large and small, and adjacent orgs like Code for America, to develop the specification and get it deployed. As of 2012, there were something like 50 cities and 10 vendors live on the standard. I would say that Open311 never had the "slingshot" or "magnet" it really needed to become huge and impactful -- it was more of a slow grind. But Phil in particular gets tons and tons of credit for making it happen.

And...?

In thinking about this, I also looked into the history of foundational web standards like HTTP and SMTP. Here is Tim Berners-Lee's original concept for an online hypertext system, and here is his more formal proposal to his bosses at CERN to fund initial work on the project.
He asked for $50k in manpower and $30k in software licenses. Glad his bosses gave him the green light! Here is Jon Postel's original proposal for SMTP (the primary protocol behind email) to the IETF networking group. I honestly don't know the politics of how either of these went from whitepaper to reality, and I'd love to hear that story from anyone who knows. Another good story is HTML5, which was begun by a splinter faction away from the W3C (dodging the slow process there and the focus on XHTML), and then eventually merged back into the formal W3C process.

One Lesson

One big takeaway I've had from working on all of this is that these things take time, and that if you're playing the open standards game, you need the ability to be patient (in addition to having a clever go-to-market hack). It's difficult to push a standard on a startup timeline. You'll notice that many of the historical players here had full-time employers (CERN, Google, universities, etc) that gave them the stability they needed and the flexibility to devote time to this sort of project. And to reiterate the main point here -- when looking at emerging standards and protocols, we've got to focus on the question "how do we get there", and think hard about which go-to-market strategy to take.

// P.S., for a funny and slightly NSFW twist on this, see this post about the book "Where Did I Come From?", which I thought of when writing this post -- my parents definitely read me that book when I was a kid and it made an impression.