Here's a slide from 2009, when we were convincing transit agencies to open up their data, and then later building MTA BusTime:

And here's one from yesterday, from a talk I gave at the Shift Conference (blog post to follow with more on that):

Today at USV, we are hosting our 4th semiannual Trust, Safety and Security Summit. Brittany, who manages the USV portfolio network, runs about 60 events per year -- each one a peer-driven, peer-learning experience, like a mini-unconference, on topics like engineering, people, design, etc. The USV network is really incredible and the summits are a big part of it.

I always attend the Trust, Safety and Security summits as part of my policy-focused work. Pretty much every network we are investors in has a "trust and safety" team, which deals with issues ranging from content policies (spam, harassment, etc.) to physical safety (on networks with a real-world component) to dealing with law enforcement. We also include security (data security, physical security) here -- often managed by a different team, but with many issues that overlap with T&S.

What's amazing to witness when working with Trust, Safety and Security teams is how rapidly they are innovating on policy. We've long described web services as akin to governments, and it's in this area that the comparison is most apparent. Each community is developing its own practices and norms and rapidly iterating on the design of its policies based on lots and lots and lots of real-time data.

What's notable is that across a wide variety of platforms (from messaging apps like Kik, to marketplaces like Etsy and Kickstarter, to real-world networks like Kitchensurfing and Sidecar, to security services like Cloudflare and Sift Science), the common element in terms of policy is the ability to handle the onboarding of millions of new users per day thanks to data-driven, peer-produced policy devices -- which you could largely classify as "reputation systems". Note that this approach works for "centralized" networks like the ones listed above as well as for decentralized systems (like email and bitcoin), and that governing in decentralized systems has its own set of challenges.

This is a fundamentally different regulatory model than what we have in the real world. On the internet, the model is "go ahead and do -- but we'll track it, and your reputation will be affected if you're a bad actor", whereas with real-world government, the model is more "get our permission first, then go do". I've described this before as "regulation 1.0" vs. "regulation 2.0":

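To make the distinction concrete, here's a toy sketch of the "go ahead and do -- but we'll track it" model. The event types, weights, and threshold are all made up for illustration; real trust and safety systems are far more sophisticated, but the shape is the same: onboard everyone immediately, and let peer-produced data determine standing after the fact.

```python
# Toy sketch of a "regulation 2.0" style reputation system: no up-front
# permission step; every actor is admitted immediately, and peer feedback
# adjusts their standing over time. Event names, weights, and the threshold
# below are invented for illustration only.
from collections import defaultdict

FEEDBACK_WEIGHTS = {
    "completed_transaction": +1.0,
    "five_star_review": +2.0,
    "spam_report": -5.0,
    "harassment_report": -25.0,
}
SUSPEND_BELOW = -10.0  # made-up threshold

scores = defaultdict(float)

def onboard(user_id: str) -> None:
    """Regulation 2.0: everyone gets in right away -- no pre-approval."""
    scores[user_id]  # touching the key admits the user with a neutral score

def record_feedback(user_id: str, event: str) -> None:
    """Peer-produced signals continuously update reputation."""
    scores[user_id] += FEEDBACK_WEIGHTS.get(event, 0.0)

def in_good_standing(user_id: str) -> bool:
    """Bad actors lose access after the fact, based on observed behavior."""
    return scores[user_id] > SUSPEND_BELOW

onboard("driver_42")
record_feedback("driver_42", "five_star_review")
record_feedback("driver_42", "spam_report")
print(in_good_standing("driver_42"))  # True: still above the threshold
```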
I recently wrote a white paper for the Data-Smart City Solutions program at the Harvard Kennedy School on this topic, which I have neglected to blog about here so far. It's quite long, but the above is basically the TL;DR version. I mention it today because we continue to be faced with the challenge of applying regulation 1.0 models to a regulation 2.0 world. Here are two examples:

First, the NYC Taxi and Limousine Commission's recently proposed rules for regulating on-demand ride applications. At least two aspects of the proposed rules are really problematic:
1. TLC wants to require its sign-off on any new on-demand ride apps, including all updates to existing apps.
2. TLC will limit any driver to having only one active device in their car.
On #1: apps ship updates nearly every day. Imagine adding a layer of regulatory approval to that step -- and imagine that approval needing to come from a government agency without deep expertise in application development. It's bad enough that developers need Apple's approval to ship iOS apps; we simply cannot allow for this kind of friction when bringing products to market.

On #2: the last thing we want to do is introduce artificial scarcity into the system. The beauty of regulation 2.0 is that we can welcome new entrants, welcome innovations, and welcome competition. We don't need to impose barriers and limits. And we certainly don't want new regulations to entrench incumbents (whether that's the existing taxi/livery system or new incumbents like Uber).

Second, the NYS Dept of Financial Services this week released its final BitLicense, which will regulate bitcoin service providers. Coin Center has a detailed response to the BitLicense framework, which points out the following major flaws:
Anti money laundering requirements are improved but vague.
A requirement that new products be pre-approved by the NYDFS superintendent.
Custody or control of consumer funds is not defined in a way that takes full account of the technology’s capabilities.
Language which could prevent businesses from lawfully protecting customers from publicly revealing their transaction histories.
The lack of a defined on-ramp for startups.
Without getting into all the details, I'll note two big ones: DFS pre-approval for all app updates (the same issue as with the TLC) and the "lack of a defined on-ramp for startups". This idea of an "on-ramp" is critical -- it's the key thing that all the web platforms referenced at the top of this post get right, and it's the core idea behind regulation 2.0. Because we collect so much data in real time, we can vastly open up the "on-ramps", whether those are for new customers/users (in the case of web platforms) or for new startups (in the case of government regulations). The challenge is that we ultimately need to decide to make a pretty profound trade: trading up-front, permission-based systems for open systems made accountable through data.
The challenge here is exacerbated by the fact that it will be resisted on both sides: governments will not want to relinquish the ability to grant permissions, and platforms will not want to relinquish data. So perhaps we will remain at a standoff, or perhaps we can find an opportunity to consciously make that trade -- dropping permission requirements in exchange for opening up more data. This is the core idea behind my Regulation 2.0 white paper, and I suspect we'll see the opportunity to do this play out again and again in the coming months and years.


I've spent the better part of the last six years thinking about where web standards come from. Before joining USV, I was at the (now retired) urban tech incubator OpenPlans, where, among other things, we worked to further "open" technology solutions, including open data formats and web protocols. The two biggest standards we worked on were GTFS, the now ubiquitous format for transit data, including routes, schedules and real-time data for buses and trains; and Open311, an open protocol for reporting problems to cities (broken streetlights, potholes, etc.) and asking questions (how do I dispose of paint cans?). Each has its own origin story, which I'll get into a little bit below.

Last week, I wrote about "venture capital vs. community capital" (i.e., the "cycle of domination and disruption") -- and really the point of that talk was the relationship between proprietary platforms and open protocols. My point in that post was that this tension is nothing new; in fact it is a regular part of the continuous cycle of the bundling and unbundling of technologies, dating back, well, forever.

Given the emergence of bitcoin and the blockchain as an application platform, it feels like we are in the midst of another wave of energy and effort around the development and deployment of web standards. So we are seeing a ton of new open standards and protocols being imagined, proposed, and developed. The key question to be asking at this moment is not "what is the perfect open standard", but rather, "how do these things come to be, anyway?" Joi Ito talks about the Internet as "
The Brute Force Approach

Word on the street is that USB-C was less of a consensus-driven standards-body project and more of an Apple hand-off. Time will tell, but now that USB-C is the port to beat all ports in the 12-inch MacBook, it could become the single standard for laptop and mobile/tablet ports. You can do this if you're huge (see also: Microsoft and .doc, Adobe and .pdf).

The Happy Magnet Approach

I mentioned the GTFS standard, which is now the primary way transit agencies publish route, schedule and real-time data. GTFS came to be through work between Google and Portland's Tri-Met back in 2005 -- a collaboration to get Portland's transit data into Google Maps, for which they created a lightweight standard. Then, Google used "hey, don't you want your data in Google Maps?" as the happy magnet to draw other agencies (often VERY reluctantly) into publishing their data in GTFS as well. Here's a diagram I made back in 2010 to tell this story:
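Part of GTFS's appeal is that the format itself is easy to work with: a static GTFS feed is just a zip of plain-text CSV files (stops.txt, routes.txt, trips.txt, stop_times.txt, and so on), with a separate protobuf-based spec for real-time data. As a rough sketch -- the feed path below is a placeholder, not a real file -- reading one in Python looks something like this:

```python
# Minimal sketch of reading a static GTFS feed: a zip archive of CSV files.
# The file names come from the GTFS spec; the feed path is hypothetical.
import csv
import io
import zipfile

FEED_PATH = "gtfs_feed.zip"  # placeholder: a local copy of an agency's feed

with zipfile.ZipFile(FEED_PATH) as feed:
    # routes.txt: one row per route (bus line, subway line, etc.)
    with feed.open("routes.txt") as f:
        routes = list(csv.DictReader(io.TextIOWrapper(f, "utf-8-sig")))
    # stops.txt: one row per stop, with a name and lat/lon
    with feed.open("stops.txt") as f:
        stops = list(csv.DictReader(io.TextIOWrapper(f, "utf-8-sig")))

print(f"{len(routes)} routes, {len(stops)} stops")
for route in routes[:5]:
    print(route["route_id"], route.get("route_short_name"), route.get("route_long_name"))
```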
This approach includes elements of the Brute Force approach -- you need to have outsized leverage / distribution to pull this off. It's also worth noting that GTFS won the day (handily) vs. a number of similar formats that were being developed by the formal consortia of transit operators. I remember talking to folks at the time who had been working on those other standards, who were pissed that Google just swept in and helped bring GTFS to market. But that's exactly the point I want to make here: a path to market is often more important than a perfect design.

The Awesome Partner Approach

Not really knowing the whole story behind Creative Commons, it seems to me that one of the huge moments for that project was their partnership with Flickr to bring CC-licensed photos to market -- giving photographers the ability to tag with CC licenses, and giving users the ability to search by CC. CC was a small org, but they were able to partner with a large player to get reach and distribution.

The Make-them-an-offer-they-can't-refuse Approach

Blockchain hacker Matan Field recently described the two big innovations of bitcoin as 1) the ledger and 2) the incentive mechanism. The incentive mechanism is the key -- bitcoin and similar cryptoequity projects have a built-in incentive to participate: give (compute cycles) and get (coins/tokens). While the Bitcoin whitepaper could have been "just another whitepaper" (future blog post needed on that -- aka the open standards graveyard), it had a powerful built-in incentive model that drew people in.

The Bottom-up Approach

At our team meeting on Monday, we got to discussing how OAuth came to be. (For those not familiar, OAuth is the standard protocol for allowing one app to perform actions for you in a different app -- e.g., "allow this app to post to Twitter for me".) According to the history on Wikipedia, OAuth started with the desire to delegate API access between Twitter and Ma.gnolia, using OpenID, and from there a group of open web hackers took the project on: first as an informal collaboration, then as a more organized discussion group, and finally as a formal proposal and working group at the IETF. From being around the folks working on this at the time, it felt like a very organic, bottom-up situation -- less a theoretical top-down need and more a simple, practical solution to a point-to-point problem that grew into something bigger.
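The protocol has evolved since those early days (the original Twitter-era spec was OAuth 1.0; most services today use OAuth 2.0), but the basic shape of the delegation is the same. Here's a minimal sketch of an OAuth 2.0 authorization-code flow -- all of the URLs, client credentials, and scopes are placeholders, not any real provider's API:

```python
# Sketch of an OAuth 2.0 authorization-code flow. Every URL, credential,
# and scope below is a placeholder for illustration.
import secrets
from urllib.parse import urlencode

import requests

AUTHORIZE_URL = "https://provider.example.com/oauth/authorize"  # placeholder
TOKEN_URL = "https://provider.example.com/oauth/token"          # placeholder
CLIENT_ID = "my-client-id"                                      # placeholder
CLIENT_SECRET = "my-client-secret"                              # placeholder
REDIRECT_URI = "https://myapp.example.com/callback"             # placeholder

# Step 1: send the user to the provider to approve the delegation.
state = secrets.token_urlsafe(16)  # anti-CSRF value, echoed back by the provider
print(AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "post:status",  # placeholder scope
    "state": state,
}))

# Step 2: the provider redirects back to REDIRECT_URI with ?code=...&state=...
# The app's callback handler exchanges that short-lived code for an access token.
def exchange_code_for_token(code: str) -> str:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# Step 3: call the other service's API on the user's behalf, without ever
# seeing the user's password, e.g.:
# requests.post(api_url, headers={"Authorization": f"Bearer {token}"}, json=...)
```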