John Oliver has become the most important voice in tech policy (and maybe policy in general). His gift, his talent, his skill: turning wonky policy language that makes people glaze over into messages that people connect to and care about. Last fall, he took what may be the most boring, confusing term ever, Net Neutrality, and made it relatable as Cable Company Fuckery. Eight million people watched that video, and it was a big factor behind the more than four million comments left at the FCC on an issue that even most tech people had a hard time explaining to each other.

Now he has tackled another mind-bending but really very important topic: surveillance. It's amazing, really. Huge, complicated, important issue. Real-life spy stories, with real-life heroes and villains. And no one gives a shit at all. But when you say it the right way -- in this case: should the government be able to see your dick pic? -- people light up. This is 30 minutes of truly instructive brilliance:

The best part: he hands Snowden a folder labeled top secret containing an 8x10 photo of his own penis, and asks Snowden to re-explain every NSA spy program in terms of "the dick pic test".

On the one hand, you could argue that it's sad that policy issues need to get boiled down to "dick pics" and "fuckery" for people to get them. On the other hand, it's even sadder that the people investing time, energy, and effort in working on these issues (myself included) don't grasp that, and don't use it to make sure their ideas connect. Thankfully we have John Oliver to help us with that.

This piece is brilliant -- in particular the way he opens Snowden's eyes to the extent to which people don't get this issue, misunderstand who he is and what he did, and need it presented to them in a different, simpler way. The major point here is that no matter your feelings on what Snowden did, it's all for naught if it doesn't trigger an actual conversation. And while it's easy for folks in the tech / policy community to feel like that conversation is happening, the truth is that on a broad popular level it's not.

So once again John Oliver has shown us how to take a super important, super complicated, and basically ignored issue and put it on the table in a way people can chew on. Bravo. From here on out, I'm going to start looking at every policy issue through the lens of WWJD -- what would John Oliver do -- and lift it out of the vegetable garden of policy talk and into the headspace of people on the street.
This is the latest post in a series on Regulation 2.0 that I’m developing into a white paper for the Program on Municipal Innovation at the Harvard Kennedy School of Government.

Yesterday, the Boston Globe reported that an Uber driver kidnapped and raped a passenger. First, my heart goes out to the passenger, her friends, and her family. And second, I take this as yet another test of our fledgling ability to create scalable systems for trust, safety, and security built on the web. This example shows us that these systems are far from perfect, and this is precisely the kind of worst-case scenario that anyone thinking about trust, safety, and security wants to prevent. As I’ve written about previously, trust, safety, and security are pillars of successful and healthy web platforms:
Safety means putting measures in place that prevent user abuse, hold members accountable, and provide assistance when a crisis occurs.
Trust, a bit more nuanced in how it's created, comes from the explicit and implicit contracts between the company, its customers, and its employees.
Security protects the company, customers, and employees from breaches, digital or physical, while abiding by local, national, and international law.
An event like this has compromised all three. The question, then, is how to improve these systems, and whether, over time, the level of trust, safety, and security we can ultimately achieve is better than what we could do before. The idea I’ve been presenting here is that social web platforms, dating back to eBay in the late 90s, have been in a continual process of inventing “regulatory” systems that make it possible and safe(r) to transact with strangers. The working hypothesis is that these systems are not only scalable in a way that traditional regulatory systems aren’t -- building on the “trust, then verify” model -- but can actually be more effective than traditional “permission-based” licensing and permitting regimes. In other words, they trade access to the market (relatively lenient) for hyper-accountability (extremely strict). Compare that to traditional systems, which don’t have access to vast and granular data and so can only rely on strict up-front vetting followed by limited, infrequent oversight. You might describe it like this:
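To make the contrast concrete, here is a rough, hypothetical sketch -- the names and thresholds are invented for illustration and are not any platform's actual logic -- of what "lenient access traded for strict, data-driven accountability" might look like next to one-time, permission-based vetting:

```python
# Hypothetical sketch only: names and thresholds below are illustrative,
# not any platform's real policy.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Participant:
    user_id: str
    ratings: list[float] = field(default_factory=list)  # 1-5 stars, one per transaction
    flags: int = 0                                       # safety reports against this user
    active: bool = True                                  # admitted by default: "open access"

def enforce(p: Participant, min_rating: float = 4.6, max_flags: int = 1, window: int = 50) -> None:
    """Strict, data-driven accountability: fall below the bar and access is revoked."""
    recent = p.ratings[-window:]
    if p.flags > max_flags or (len(recent) >= 10 and mean(recent) < min_rating):
        p.active = False  # suspended pending review

def record_transaction(p: Participant, rating: float, flagged: bool = False) -> None:
    """Every transaction generates data, so oversight is continuous rather than up-front."""
    p.ratings.append(rating)
    if flagged:
        p.flags += 1
    enforce(p)

# Contrast: a traditional "permission-based" regime vets once, up front,
# and then relies on limited, infrequent oversight.
def traditional_license(passed_background_check: bool, passed_exam: bool) -> bool:
    return passed_background_check and passed_exam  # checked once, rarely revisited
```

The point of the sketch is the shape of the trade, not the numbers: anyone can start transacting immediately, but every transaction feeds the data that can take that access away.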
This model has worked well in situations where the risk of personal harm is relatively low. If I buy something on eBay and the seller never ships, I’ll live. But when we start connecting real people in the real world, the stakes get higher and the risks more serious. There are many important questions that we as entrepreneurs, investors, and regulators should consider:
How much risk is acceptable in an “open access / high accountability” model, and how could regulators mitigate known risks by extending and building on Regulation 2.0 techniques?
How can we increase the “lead time” for regulators to consider these questions, and come up with novel solutions, while at the same time incentivizing startups to “raise their hand” and participate in the process, without fear of getting preemptively shut down before their ideas are validated?
How could regulators adopt a 2.0 approach in the face of an increasing number of new models across additional sectors (food, health, education, finance, etc.)?
Here are a few ideas to address these questions:

With all of this, the key is in the information. Looking at the diagram above, “high accountability” is another way of saying “built on information”. The key tradeoff being made by web platforms and their users is access to the market in exchange for high accountability through data. One could imagine regulators taking a similar approach to startups in highly regulated sectors.

Building on this, we should think about safe harbors and incentives to register. The idea of high-information regulation only works if there is an exchange of information! So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data? Such a system, done right, could give regulators the needed lead time to understand a new approach, while also developing a relationship with entrepreneurs in the sector. Entrepreneurs are largely skeptical of this approach, given how well-worn the “build an audience, then ask for forgiveness” model has become. But that model is risky and expensive, and having now seen it play out a few times, perhaps we can find a more moderate approach.

Consider where to implement targeted transparency. One of the ways web platforms convince users to participate in the “open access for accountability through data” trade is that many of the outputs of this data exchange are visible; that is part of the trade. I can see my eBay seller score; Uber drivers can see their driver score; and so on. A major concern for many companies and individuals is that increased data-sharing with the government will be a one-way street; targeted transparency efforts can make the exchange clearer.

Think about how to involve third-party stakeholders in the accountability process. For example, impact on neighbors has been one of the complaints about the growth of the home-sharing sector. Rather than make a blanket rule on the subject, how might it be possible to include these stakeholders in the data-driven accountability process? One could imagine a neighbor hotline, or a feedback system, that could incentivize good behavior and allow for meaningful third-party input.

Finally, consider endorsing a right to an API key for participants in these ecosystems.
Such a right would allow, or require, actors to make their reputation portable, which would increase accountability broadly. It also has implications for labor rights and organizing, as Albert describes in the post linked above. Alternatively, or in addition, we could think about real-time disclosure requirements for data with trust and safety implications, such as driver ratings. Such disclosures could be made as part of the trade for the freedom to operate.
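As a sketch of what that might mean in practice -- the record format, field names, and the idea of an oversight feed are all invented for illustration, not any platform's actual API -- a portable reputation record and a real-time disclosure of it could be as simple as:

```python
# Hypothetical sketch: a "portable reputation" record plus a real-time disclosure
# of it. Field names and the regulator feed are invented for illustration.
import json
from datetime import datetime, timezone

def portable_reputation(user_id: str, ratings: list[float],
                        completed_transactions: int, safety_flags: int) -> dict:
    """A record the participant could carry to another platform, or share with a
    regulator, under a 'right to an API key'."""
    return {
        "subject": user_id,
        "issued_at": datetime.now(timezone.utc).isoformat(),
        "completed_transactions": completed_transactions,
        "average_rating": round(sum(ratings) / len(ratings), 2) if ratings else None,
        "safety_flags": safety_flags,
    }

def disclose(record: dict) -> str:
    """Real-time disclosure: serialize trust-and-safety data as it changes, as part
    of the trade for the freedom to operate."""
    return json.dumps(record)

# Example: a driver's record, ready to publish to a (hypothetical) oversight feed.
print(disclose(portable_reputation("driver-123", [4.9, 4.7, 5.0],
                                   completed_transactions=3, safety_flags=0)))
```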
Relatedly, consider ways to use encryption and aggregate data for analysis to avoid some of the privacy issues inherent in this approach.
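For instance -- and this is just a sketch of the aggregation idea, with made-up rating bands and a crude stand-in for more formal privacy techniques -- a platform could report the shape of its safety record without handing over any individual driver's or rider's data:

```python
# Hypothetical sketch: disclose aggregates, not individuals. The rating bands and
# noise level are made up; a real system would use formal techniques such as
# differential privacy.
import random
from collections import Counter

def rating_histogram(driver_averages: dict[str, float], noise: float = 0.0) -> dict[str, int]:
    """Bucket per-driver average ratings into coarse bands, optionally adding noise
    so exact counts (and any one driver's presence) aren't revealed."""
    buckets = Counter()
    for avg in driver_averages.values():
        if avg >= 4.8:
            buckets["4.8-5.0"] += 1
        elif avg >= 4.5:
            buckets["4.5-4.8"] += 1
        else:
            buckets["below 4.5"] += 1
    if noise > 0:
        return {band: max(0, round(count + random.uniform(-noise, noise)))
                for band, count in buckets.items()}
    return dict(buckets)

# Example: three drivers' averages become a coarse, noisy histogram.
print(rating_histogram({"d1": 4.9, "d2": 4.6, "d3": 4.3}, noise=1.0))
```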
While users trust web platforms with very specific data about their activities, sharing that data with the government is not typically part of the agreement, and it needs to be handled carefully. For example, even though Apple knows how fast I’m driving at any given moment, I would be surprised and upset if it reported me to the authorities for speeding. Of course, emergent safety situations, such as the Uber incident above, are completely different: platforms cooperate regularly and swiftly with law enforcement in those cases. While it is not clear that any of these techniques would have prevented this incident, or that it could have been prevented at all, my idealistic view is that by working together on policy responses to the risks and opportunities inherent in all of these new systems, we can build stronger, safer, and more scalable approaches.
As part of my series on Regulation 2.0, which I'm putting together for the Project on Municipal Innovation at the Harvard Kennedy School, today I am going to employ a bit of a cop-out: rather than publish my next section (which I haven't finished yet, largely because my whole family has the flu right now), I will publish a report written earlier this year by my friend Max Pomeranc. Max is a former congressional chief of staff who did his master's at the Kennedy School last year. For his "policy analysis exercise" (essentially a thesis paper), Max looked at regulation and the peer economy, exploring the idea of a "2.0" approach. I was Max's advisor for the paper, and he has since gone on to a policy job at Airbnb. Max did a great job of examining two recent examples of the peer economy meeting regulation -- the California ridesharing rules and the JOBS Act for equity crowdfunding -- and exploring some concepts that could be part of a "2.0" approach to regulation. His full report is here. It's a relatively quick read and a good starting place for thinking about these ideas. I am off to meet Max for breakfast as we speak! More tomorrow.