This is the latest post in a series on Regulation 2.0 that I’m developing into a white paper for the Program on Municipal Innovation at the Harvard Kennedy School of Government. Yesterday, the Boston Globe reported that an Uber driver kidnapped and raped a passenger. First, my heart goes out to the passenger, her friends and her family. And second, I take this as yet another test of our fledgling ability to create scalable systems for trust, safety and security built on the web. This example shows us that these systems are far from perfect. This is precisely the kind of worst-case scenario that anyone thinking about these trust, safety and security issues wants to prevent. As I’ve written about previously, trust, safety and security are pillars of successful and healthy web platforms:
Safety is putting measures into place that prevent user abuse, hold members accountable, and provide assistance when a crisis occurs.
Trust, a bit more nuanced in how it's created, is about the explicit and implicit contracts between the company, customers and employees.
Security protects the company, customers, and employees from breach, digital or physical, all while abiding by local, national and international law.
An event like this has compromised all three. The question, then, is how to improve these systems, and whether, over time, the level of trust, safety and security we can ultimately achieve is better than what we could achieve before. The idea I’ve been presenting here is that social web platforms, dating back to eBay in the late 90s, have been in a continual process of inventing “regulatory” systems that make it possible and safe(r) to transact with strangers. The working hypothesis is that these systems, built on the “trust, then verify” model, are not only scalable in a way that traditional regulatory systems aren’t, but can actually be more effective than traditional “permission-based” licensing and permitting regimes. In other words, they trade access to the market (relatively lenient) for hyper-accountability (extremely strict). Compare that to traditional systems that don’t have access to vast and granular data, and so can only rely on strict up-front vetting followed by limited, infrequent oversight. You might describe it like this:
This model has worked well in situations where the risk of personal harm is relatively low. If I buy something on eBay and the seller never ships, I’ll live. But when we start connecting real people in the real world, things get riskier and more dangerous. There are many important questions that we as entrepreneurs, investors and regulators should consider:
How much risk is acceptable in an “open access / high accountability” model, and how could regulators mitigate known risks by extending and building on Regulation 2.0 techniques?
How can we increase the “lead time” for regulators to consider these questions and come up with novel solutions, while at the same time incentivizing startups to “raise their hand” and participate in the process without fear of being preemptively shut down before their ideas are validated?
How could regulators adopt a 2.0 approach in the face of an increasing number of new models in additional sectors (food, health, education, finance, etc.)?
Here are a few ideas to address these questions:

With all of this, the key is in the information. Looking at the diagram above, “high accountability” is another way of saying “built on information”. The key tradeoff being made by web platforms and their users is access to the market in exchange for high accountability through data. One could imagine regulators taking a similar approach to startups in highly regulated sectors.

Building on this, we should think about safe harbors and incentives to register. The idea of high-information regulation only works if there is an exchange of information! So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data? Such a system, done right, could give regulators the lead time they need to understand a new approach, while also developing a relationship with entrepreneurs in the sector. Entrepreneurs are largely skeptical of this approach, given how entrenched the “build an audience, then ask for forgiveness” model has become. But that model is risky and expensive, and having watched it play out a few times, perhaps we can find a more moderate approach.

Consider where to implement targeted transparency. One of the ways web platforms convince users to accept the “open access for accountability through data” trade is that many of the outputs of this data exchange are visible. This is part of the trade: I can see my eBay seller score; Uber drivers can see their driver rating; and so on. A major concern that many companies and individuals have is that increased data-sharing with the government will be a one-way street; targeted transparency efforts can make the terms of that exchange clearer.

Think about how to involve third-party stakeholders in the accountability process. For example, the impact on neighbors has been one of the complaints about the growth of the home-sharing sector. Rather than make a blanket rule on the subject, how might it be possible to include these stakeholders in the data-driven accountability process? One could imagine a neighbor hotline, or a feedback system, that could incentivize good behavior and allow for meaningful third-party input.

Consider endorsing a right to an API key for participants in these ecosystems. Such a right would allow / require actors to make their reputation portable, which would increase accountability broadly. It also has implications for labor rights and organizing, as Albert describes in the above-linked post. Alternatively, or in addition, we could think about real-time disclosure requirements for data with trust and safety implications, such as driver ratings. Such disclosures could be made as part of the trade for the freedom to operate.

Related, consider ways to use encryption and aggregate data for analysis, to avoid some of the privacy issues inherent in this approach. While users trust web platforms with very specific data about their activities, sharing that data with the government is not typically part of the agreement, and it needs to be handled carefully. For example, even though Apple knows how fast I’m driving at any given moment, I would be surprised and upset if they reported me to the authorities for speeding. Of course, this is completely different in emergent safety situations, such as the Uber example above, where platforms cooperate regularly and swiftly with law enforcement. A rough sketch of what such an aggregate, privacy-preserving disclosure might look like follows below.
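To make the last two ideas a bit more concrete, here is a minimal sketch in Python of what a periodic, aggregate safety disclosure from a platform to a regulator might look like. This is purely illustrative: the field names, thresholds and reporting period are assumptions of mine, not any real platform’s API or any regulator’s actual requirements. The point is simply that a platform could share rating distributions and incident counts by region, rather than raw trip-level data, preserving the exchange of information while limiting exposure of individual users.

    # Hypothetical sketch of "accountability through aggregate data": a platform
    # publishes periodic safety metrics to a regulator without exposing individual
    # riders or drivers. All field names and thresholds are illustrative assumptions.
    import json
    import statistics
    from collections import defaultdict

    MIN_CELL_SIZE = 50  # withhold regions with too few drivers to protect privacy


    def build_disclosure(driver_ratings, incident_reports, period):
        """driver_ratings: list of (region, rating) tuples for the period.
        incident_reports: list of region names, one entry per reported incident."""
        by_region = defaultdict(list)
        for region, rating in driver_ratings:
            by_region[region].append(rating)

        incidents = defaultdict(int)
        for region in incident_reports:
            incidents[region] += 1

        report = {"period": period, "regions": []}
        for region, ratings in sorted(by_region.items()):
            if len(ratings) < MIN_CELL_SIZE:
                continue  # aggregate only; small cells are suppressed
            report["regions"].append({
                "region": region,
                "active_drivers": len(ratings),
                "median_rating": round(statistics.median(ratings), 2),
                "share_below_4_6": round(sum(r < 4.6 for r in ratings) / len(ratings), 3),
                "incident_reports": incidents.get(region, 0),
            })
        return report


    if __name__ == "__main__":
        sample = [("boston", 4.8)] * 60 + [("boston", 4.4)] * 15
        print(json.dumps(build_disclosure(sample, ["boston"] * 3, "2014-12"), indent=2))

The small-cell suppression threshold is one simple way to keep aggregates from pointing back to identifiable individuals; encryption or more formal techniques like differential privacy could push the same idea further.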
While it is not clear that any of these techniques would have prevented this incident, or that it would have been possible to prevent it at all, my idealistic viewpoint is that by working to collaborate on policy responses to the risks and opportunities inherent in all of these new systems, we can build stronger, safer and more scalable approaches. // thanks to Brittany Laughlin and Aaron Wright for their input on this post