Last week, a friend passed away after a relatively brief but intense battle with lung cancer. I didn't know Paul well, but he was very close with a few of my very close friends, and I had spent enough time with him to understand that he was special: he had a light inside of him. A curiosity and energy that opened his eyes, lit up his mind, and made him go. I could feel it the first time I met him.

Later last week, I was traveling for work, first to the Bay Area and then to Austin. In both cases, I spent time with old, great friends who I don't get to see very often. And in both cases, I stayed out way later than I should or usually do, and drank way more than I should or usually do. But as with most things like that, there is good along with the bad. Not that surprisingly, I guess, I ended up having almost the same conversation in both places. And that conversation was about how wonderful it is to find people in your life who have that light inside of them.

There are so many nasty people in the world, and being a kid (or a grownup, for that matter) can be hard, and compoundingly hard the fewer sources of light you have in your life. You don't always find people with that light -- right away, or sometimes at all. So when you do, it is so fantastic. Thinking about Paul, and thinking about my friends, made me so thankful for the people in my life who are generous, curious, thoughtful, creative, and full of light. It has taken a long time to find them. And it's also so easy to take them for granted if you're not careful. It makes me think about how we find people who have this (specifically, on the internet), and also how we find and cultivate our own center of light, often in the face of doubt and difficulty.

Coincidentally, on the flight home, I started reading James Altucher's book, Choose Yourself. One idea he stresses is that the key to happiness (and success) is lighting and stoking the fire inside you, and letting everything else flow from that. And that this can be hard to do, since there are plenty of ways for us to doubt ourselves, get distracted, and lose faith that our fire is real and valuable. He calls it a fire, not a light, but it's the same idea. It's getting over whatever bullshit may be in your way (approval of others, fitting in, etc.), and focusing on the energy, the fire, at your core. Being honest and accepting about what that is, and letting it guide your way to more and more. It's a simple idea, but it has really stuck with me.

So, I'll just end by thanking everyone who puts light out into the world, including many of those in my current orbit who I appreciate enormously. And by encouraging myself and everyone else to keep stoking the fire, even when it's hard to do.
As part of my series on Regulation 2.0, which I'm putting together for the Project on Municipal Innovation at the Harvard Kennedy School, today I am going to employ a bit of a cop-out: rather than publish my next section (which I haven't finished yet, largely because my whole family has the flu right now), I will publish a report written earlier this year by my friend Max Pomeranc. Max is a former congressional chief of staff who did his master's at the Kennedy School last year. For his "policy analysis exercise" (essentially a thesis paper), Max looked at regulation and the peer economy, exploring the idea of a "2.0" approach. I was Max's advisor for the paper, and he has since gone on to a policy job at Airbnb. Max did a great job of examining two recent examples of the peer economy meeting regulation -- the California ridesharing rules and the JOBS Act for equity crowdfunding -- and of exploring some concepts that could be part of a "2.0" approach to regulation. His full report is below.
Yesterday, the Boston Globe reported that an Uber driver had been charged with assaulting a passenger. First, my heart goes out to the passenger, her friends and her family. And second, I take this as yet another test of our fledgling ability to create scalable systems for trust, safety and security built on the web. This example shows us that these systems are far from perfect. This is precisely the kind of worst-case scenario that anyone thinking about these trust, safety and security issues wants to prevent. As I've written about previously, trust, safety and security are pillars of successful and healthy web platforms:
Safety is putting measures into place that prevent user abuse, hold members accountable, and provide assistance when a crisis occurs.
Trust, a bit more nuanced in how it's created, is the set of explicit and implicit contracts among the company, its customers, and its employees.
Security protects the company, customers, and employees from breach, digital or physical, all while abiding by local, national and international law.
An event like this has compromised all three. The question, then, is how to improve these systems -- and whether, over time, the level of trust, safety and security we can ultimately achieve is better than what we could achieve before. The idea I've been presenting here is that social web platforms, dating back to eBay in the late 90s, have been in a continual process of inventing "regulatory" systems that make it possible and safe(r) to transact with strangers. The working hypothesis is that these systems are not only scalable in a way that traditional regulatory systems aren't -- building on the "trust, then verify" model -- but can actually be more effective than traditional "permission-based" licensing and permitting regimes. In other words, they trade access to the market (relatively lenient) for hyper-accountability (extremely strict). Compare that to traditional systems, which don't have access to vast and granular data and so can only rely on strict up-front vetting followed by limited, infrequent oversight. You might describe it like this:
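Traditional regulation: strict up-front vetting (licenses and permits), followed by limited, infrequent oversight.
Regulation 2.0: lenient up-front access to the market, followed by continuous, data-driven accountability.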
This model has worked well in situations with relatively low risk of personal harm. If I buy something on eBay and the seller never ships, I'll live. But when we start connecting real people in the real world, things get riskier and more dangerous. There are many important questions that we as entrepreneurs, investors and regulators should consider:
How much risk is acceptable in an "open access / high accountability" model, and how could regulators mitigate known risks by extending and building on Regulation 2.0 techniques?
How can we increase the “lead time” for regulators to consider these questions, and come up with novel solutions, while at the same time incentivizing startups to “raise their hand” and participate in the process, without fear of getting preemptively shut down before their ideas are validated?
How could regulators adopt a 2.0 approach in the face of an increasing number of new models in additional sectors (food, health, education, finance, etc.)?
Here are a few ideas to address these questions:

The key is in the information. Looking at the comparison above, "high accountability" is another way of saying "built on information". The key tradeoff being made by web platforms and their users is access to the market in exchange for high accountability through data. One could imagine regulators taking a similar approach with startups in highly regulated sectors.

Building on this, think about safe harbors and incentives to register. High-information regulation only works if there is an exchange of information! So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data? Such a system, done right, could give regulators the needed lead time to understand a new approach, while also developing a relationship with entrepreneurs in the sector. Entrepreneurs are largely skeptical of this approach, given how well-worn the "build an audience, then ask for forgiveness" model has become. But that model is risky and expensive, and having now watched it play out a few times, perhaps we can find a more moderate approach.

Consider where to implement targeted transparency. One of the ways web platforms convince users to participate in the "open access for accountability through data" trade is that many of the outputs of this data exchange are visible; that visibility is part of the trade. I can see my eBay seller score; Uber drivers can see their driver score; and so on. A major concern for many companies and individuals is that increased data-sharing with the government will be a one-way street; targeted transparency efforts can help ensure it isn't.

Think about how to involve third-party stakeholders in the accountability process. For example, impact on neighbors has been one of the complaints about the growth of the home-sharing sector. Rather than make a blanket rule on the subject, how might it be possible to include these stakeholders in the data-driven accountability process? One could imagine a neighbor hotline, or a feedback system, that could incentivize good behavior and allow for meaningful third-party input.

Consider endorsing a right to an API key for participants in these ecosystems. Such a right would allow (or require) actors to make their reputation portable, which would increase accountability broadly. It also has implications for labor rights and organizing, as Albert describes in the above-linked post. Alternatively, or in addition, we could think about real-time disclosure requirements for data with trust and safety implications, such as driver ratings. Such disclosures could be made as part of the trade for the freedom to operate.

Relatedly, consider ways to use encryption and aggregated data for analysis, to avoid some of the privacy issues inherent in this approach (a rough sketch of what this could look like follows below). While users trust web platforms with very specific data about their activities, sharing that data with the government is not typically part of the agreement, and it needs to be handled carefully. For example, even though Apple knows how fast we're driving at any given moment, we would be surprised and upset if it reported us to the authorities for speeding. Of course, this is completely different in emergent safety situations, such as the Uber example above, where platforms cooperate regularly and swiftly with law enforcement.
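To make that last idea a bit more concrete, here is a minimal sketch in Python of how a platform might share aggregate safety metrics with a regulator instead of raw, user-level records. Everything in it -- the function name, the cohort-size threshold, the flagging rule -- is an illustrative assumption, not a description of any real platform's or regulator's system.

```python
import statistics

def aggregate_driver_ratings(ratings: list[float], min_cohort_size: int = 50):
    """Return summary statistics only when the cohort is large enough
    that no individual driver or rider can be singled out."""
    if len(ratings) < min_cohort_size:
        # Suppress small cohorts rather than leak individual-level data.
        return None
    return {
        "count": len(ratings),
        "mean": round(statistics.mean(ratings), 2),
        # First of nine cut points, i.e. roughly the 10th percentile.
        "p10": round(statistics.quantiles(ratings, n=10)[0], 2),
    }

# A regulator might poll a report like this on a schedule and flag
# cohorts whose mean rating falls below an agreed threshold.
report = aggregate_driver_ratings([4.8, 4.6, 2.1] * 20)
if report and report["mean"] < 4.0:
    print("flag for review:", report)
```

The design choice worth noting is the suppression of small cohorts: the regulator gets a continuous safety signal it can act on, without ever receiving records that could identify an individual driver or rider.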
While it is not clear that any of these techniques would have prevented this incident, or that it would have been possible to prevent it at all, my idealistic view is that by collaborating on policy responses to the risks and opportunities inherent in all of these new systems, we can build stronger, safer and more scalable approaches.

// thanks to Brittany Laughlin and Aaron Wright for their input on this post