
Today at USV, we are hosting our 4th semiannual Trust, Safety and Security Summit. Brittany, who manages the USV portfolio network, runs about 60 events per year -- each one a peer-driven, peer-learning experience, like a mini-unconference on topics like engineering, people, design, etc. The USV network is really incredible and the summits are a big part of it. I always attend the Trust, Safety and Security summits as part of my policy-focused work.

Pretty much every network we are investors in has a "trust and safety" team, which deals with issues ranging from content policies (spam, harassment, etc.) to physical safety (on networks with a real-world component), to dealing with law enforcement. We also include security (data security, physical security) here -- often managed by a different team, but with many issues that overlap with trust and safety.

What's amazing to witness when working with Trust, Safety and Security teams is that they are rapidly innovating on policy. We've long described web services as akin to governments, and it's in this area where that is most apparent. Each community is developing its own practices and norms and rapidly iterating on the design of its policies based on lots and lots and lots of real-time data.

What's notable is that across a wide variety of platforms (from messaging apps like Kik, to marketplaces like Etsy and Kickstarter, to real-world networks like Kitchensurfing and Sidecar, to security services like Cloudflare and Sift Science), the common element in terms of policy is the ability to handle the onboarding of millions of new users per day thanks to data-driven, peer-produced policy devices -- which you could largely classify as "reputation systems". Note that this approach works for "centralized" networks like the ones listed above as well as for decentralized systems (like email and bitcoin), and that governing in decentralized systems has its own set of challenges.

This is a fundamentally different regulatory model than what we have in the real world. On the internet, the model is "go ahead and do -- but we'll track it, and your reputation will be affected if you're a bad actor", whereas with real-world government, the model is more "get our permission first, then go do". I've described this before as "regulation 1.0" vs. "regulation 2.0":

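To make the "reputation systems" idea above a bit more concrete, here is a minimal, purely hypothetical sketch of the "go ahead and do -- but we'll track it" model. The names, weights and thresholds are my own illustrations, not any platform's actual policy: a new participant is admitted immediately, and their standing is continuously recomputed from peer feedback and abuse reports.

```python
from dataclasses import dataclass, field

# Purely hypothetical sketch -- not any real platform's policy engine.
# New participants are admitted immediately ("go ahead and do"), and
# their standing is continuously recomputed from peer feedback.

@dataclass
class Participant:
    user_id: str
    ratings: list = field(default_factory=list)  # peer ratings on a 1-5 scale
    flags: int = 0                                # abuse reports from other users

    def reputation(self) -> float:
        """Average peer rating, discounted by abuse reports."""
        avg = sum(self.ratings) / len(self.ratings) if self.ratings else 3.0  # neutral prior
        return avg - 0.5 * self.flags

    def standing(self) -> str:
        """Open access up front; accountability comes from the data."""
        score = self.reputation()
        if score < 2.0:
            return "suspended"      # bad actors lose access after the fact
        if score < 3.5:
            return "under_review"   # borderline accounts get a human look
        return "good_standing"

# A seller joins with no history, transacts, and is held accountable by feedback.
seller = Participant(user_id="example-seller")
print(seller.standing())                       # good_standing (neutral prior)
seller.ratings += [5, 4, 1]
seller.flags += 2
print(seller.reputation(), seller.standing())  # ~2.33, under_review
```

Real systems are of course far more sophisticated -- weighting by recency, detecting fraud rings, blending machine and human review -- but the basic trade is the same: open access up front, accountability through data after the fact.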
I recently wrote a white paper for the Data-Smart City Solutions program at the Harvard Kennedy School on this topic, which I have neglected to blog about here so far. It's quite long, but the above is basically the TL;DR version. I mention it today because we continue to be faced with the challenge of applying regulation 1.0 models to a regulation 2.0 world. Here are two examples:

First, the NYC Taxi and Limousine Commission's recently proposed rules for regulating on-demand ride applications. At least two aspects of the proposed rules are really problematic:
TLC wants to require their sign off on any new on-demand ride apps, including all updates to existing apps.
TLC will limit any driver to having only one active device in their car.
On #1: apps ship updates nearly every day. Imagine adding a layer of regulatory approval to that step. And imagine that that approval needs to come from a government agency without deep expertise in application development. It's bad enough that developers need Apple's approval to ship iOS apps -- we simply cannot allow for this kind of friction when bringing products to market.

On #2: the last thing we want to do is introduce artificial scarcity into the system. The beauty of regulation 2.0 is that we can welcome new entrants, welcome innovations, and welcome competition. We don't need to impose barriers and limits. And we certainly don't want new regulations to entrench incumbents (whether that's the existing taxi/livery system or new incumbents like Uber).

Second, the NYS Department of Financial Services this week released its final BitLicense, which will regulate bitcoin service providers. Coin Center has a detailed response to the BitLicense framework, which points out the following major flaws:
Anti money laundering requirements are improved but vague.
A requirement that new products be pre-approved by the NYDFS superintendent.
Custody or control of consumer funds is not defined in a way that takes full account of the technology’s capabilities.
Language which could prevent businesses from lawfully protecting customers from publicly revealing their transaction histories.
The lack of a defined onramp for startups.
Without getting into all the details, I'll note two big ones: DFS preapproval for all app updates (same as with the TLC) and the "lack of a defined on-ramp for startups". This idea of an "on-ramp" is critical; it is the key thing that all of the web platforms referenced at the top of this post get right, and it is the core idea behind regulation 2.0. Because we collect so much data in real time, we can vastly open up the "on-ramps", whether those are for new customers/users (in the case of web platforms) or for new startups (in the case of government regulations). The challenge here is that we ultimately need to decide to make a pretty profound trade: trading up-front, permission-based systems for open systems made accountable through data.

The challenge here is exacerbated by the fact that it will be resisted on both sides: governments will not want to relinquish the ability to grant permissions, and platforms will not want to relinquish data. So perhaps we will remain at a standoff, or perhaps we can find an opportunity to consciously make that trade -- dropping permission requirements in exchange for opening up more data. This is the core idea behind my Regulation 2.0 white paper, and I suspect we'll see the opportunity to do this play out again and again in the coming months and years.

This is the latest post in a series on Regulation 2.0 that I’m developing into a white paper for the Program on Municipal Innovation at the Harvard Kennedy School of Government.

Yesterday, the Boston Globe reported that an Uber driver kidnapped and raped a passenger. First, my heart goes out to the passenger, her friends and her family. And second, I take this as yet another test of our fledgling ability to create scalable systems for trust, safety and security built on the web. This example shows us that these systems are far from perfect. This is precisely the kind of worst-case scenario that anyone thinking about these trust, safety and security issues wants to prevent. As I’ve written about previously, trust, safety and security are pillars of successful and healthy web platforms:
Safety is putting measures into place that prevent user abuse, hold members accountable, and provide assistance when a crisis occurs.
Trust, a bit more nuanced in how it's created, comes from the explicit and implicit contracts between the company, customers and employees.
Security protects the company, customers, and employees from breach, digital or physical, all while abiding by local, national and international law.
An event like this has compromised all three. The question, then, is how to improve these systems, and then whether, over time, the level of trust, safety and security we can ultimately achieve is better than what we could do before. The idea I’ve been presenting here is that social web platforms, dating back to eBay in the late 90s, have been in a continual process of inventing “regulatory” systems that make it possible and safe(r) to transact with strangers. The working hypothesis is that these systems are not only scalable in a way that traditional regulatory systems aren’t -- building on the “trust, then verify” model -- but can actually be more effective than traditional “permission-based” licensing and permitting regimes. In other words, they trade access to the market (relatively lenient) for hyper-accountability (extremely strict). Compare that to traditional systems that don’t have access to vast and granular data, which can only rely on strict up-front vetting followed by limited, infrequent oversight. You might describe it like this:

This model has worked well in situations with relatively low risk of personal harm. If I buy something on eBay and the seller never ships, I’ll live. When we start connecting real people in the real world, things get riskier and more dangerous. There are many important questions that we as entrepreneurs, investors and regulators should consider:
How much risk is acceptable in an “open access / high accountability” model, and how could regulators mitigate known risks by extending and building on regulation 2.0 techniques?
How can we increase the “lead time” for regulators to consider these questions, and come up with novel solutions, while at the same time incentivizing startups to “raise their hand” and participate in the process, without fear of getting preemptively shut down before their ideas are validated?
How could regulators adopt a 2.0 approach in the face of an increasing number of new models in additional sectors (food, health, education, finance, etc)?
Here are a few ideas to address these questions:

With all of this, the key is in the information. Looking at the diagram above, “high accountability” is another way of saying “built on information”. The key tradeoff being made by web platforms and their users is access to the market in exchange for high accountability through data. One could imagine regulators taking a similar approach to startups in highly regulated sectors.

Building on this, we should think about safe harbors and incentives to register. The idea of high-information regulation only works if there is an exchange of information! So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data? Such a system, done right, could give regulators the needed lead time to understand a new approach, while also developing a relationship with entrepreneurs in the sector. Entrepreneurs are largely skeptical of this approach, given how often the “build an audience, then ask for forgiveness” model has been played out. But that model is risky and expensive, and having now seen it play out a few times, perhaps we can find a more moderate approach.

Consider where to implement targeted transparency. One of the ways web platforms are able to convince users to participate in the “open access for accountability through data” trade is that many of the outputs of this data exchange are visible. This is part of the trade. I can see my eBay seller score; Uber drivers can see their driver score; etc. A major concern that many companies and individuals have is that increased data-sharing with the government will be a one-way street; targeted transparency efforts can make that clearer.

Think about how to involve third-party stakeholders in the accountability process. For example, impact on neighbors has been one of the complaints about the growth of the home-sharing sector. Rather than make a blanket rule on the subject, how might it be possible to include these stakeholders in the data-driven accountability process? One could imagine a neighbor hotline, or a feedback system, that could incentivize good behavior and allow for meaningful third-party input.

Consider endorsing a right to an API key for participants in these ecosystems. Such a right would allow / require actors to make their reputation portable, which would increase accountability broadly. It also has implications for labor rights and organizing, as Albert describes in the above linked post. Alternatively, or in addition, we could think about real-time disclosure requirements for data with trust and safety implications, such as driver ratings. Such disclosures could be made as part of the trade for the freedom to operate. (A rough sketch of what a portable reputation record might look like follows below.)

Related, consider ways to use encryption and aggregate data for analysis to avoid some of the privacy issues inherent in this approach. While users trust web platforms with very specific data about their activities, how that data is shared with the government is not typically part of that agreement, and this needs to be handled carefully. For example, even though Apple knows how fast I’m driving at any time, I would be surprised and upset if it reported me to the authorities for speeding. Of course, this is completely different in emergent safety situations, such as the Uber example above, where platforms cooperate regularly and swiftly with law enforcement.
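To illustrate the “right to an API key” idea, here is a small, hypothetical sketch of a portable reputation record. The field names, issuer, and signing scheme are illustrative assumptions rather than any platform's real API; the point is simply that a driver or seller could export a record that a regulator, another platform, or the participant themselves could verify.

```python
import hashlib
import hmac
import json

# Hypothetical sketch of a portable reputation record. The field names,
# issuer, and signing scheme are illustrative assumptions -- not any real
# platform's API.

PLATFORM_SIGNING_KEY = b"demo-key-held-by-the-platform"

def export_reputation(user_id: str, rating: float, trips: int) -> dict:
    """Build a reputation record a participant can take with them."""
    record = {
        "user_id": user_id,
        "average_rating": rating,
        "completed_trips": trips,
        "issuer": "example-platform",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(
        PLATFORM_SIGNING_KEY, payload, hashlib.sha256
    ).hexdigest()
    return record

def verify_reputation(record: dict) -> bool:
    """Anyone holding the key can check the record wasn't tampered with."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)

# A driver exports their record and a third party verifies it.
record = export_reputation("driver-123", rating=4.8, trips=912)
print(verify_reputation(record))  # True
```

In practice a design like this would more likely use public-key signatures, so third parties could verify a record without the platform sharing a secret key, but the basic point stands: portability plus verifiability is what makes reputation useful beyond the platform that generated it.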
While it is not clear that any of these techniques would have prevented this incident, or that it might have been possible to prevent this at all, my idealistic viewpoint is that by working to collaborate on policy responses to the risks and opportunities inherent in all of these new systems, we can build stronger, safer and more scalable approaches. // thanks to Brittany Laughlin and Aaron Wright for their input on this post
As part of my series on Regulation 2.0, which I'm putting together for the Project on Municipal Innovation at the Harvard Kennedy School, today I am going to employ a bit of a cop-out tactic: rather than publish my next section (which I haven't finished yet, largely because my whole family has the flu right now), I will publish a report written earlier this year by my friend Max Pomeranc. Max is a former congressional chief of staff who did his master's at the Kennedy School last year. For his "policy analysis exercise" (essentially a thesis paper), Max looked at regulation and the peer economy, exploring the idea of a "2.0" approach. I was Max's advisor for the paper, and he has since gone on to a policy job at Airbnb. Max did a great job of looking at two recent examples of the peer economy meeting regulation -- the California ridesharing rules and the JOBS Act for equity crowdfunding -- and exploring some concepts which could be part of a "2.0" approach to regulation. His full report is here. It's a relatively quick read and a good starting place for thinking about these ideas. I am off to meet Max for breakfast as we speak! More tomorrow.