This is part 3 in a series of posts I'm developing into a white paper on "Regulation 2.0" for the Program on Municipal Innovation at the Harvard Kennedy School of Government. For many tech industry readers of this blog, these ideas may seem obvious, but they are not intended for you! They are meant to help bring a fresh perspective to public policy makers who may not be familiar with the trust and safety systems underpinning today's social/collaborative web platforms.

Twice a year, a group of regulators and policymakers convenes to discuss their approaches to ensuring trust, safety and security in their large and diverse communities. Topics on the agenda range from financial fraud to bullying, free speech, transportation, child predation, healthcare, and the relationship between the community and law enforcement. Each is experimenting with new ways to address these community issues. As their communities grow (very quickly in some cases) and become more diverse, it's increasingly important that whatever approaches they implement can both scale to accommodate large volumes and rapid growth, and adapt to new situations. There is a lot of discussion about how data and analytics are used to help guide decision-making and policy development. And of course, they are all working within the constraints of relatively tiny staffs and relatively tiny budgets.

As you may have guessed, this group of regulators and policymakers doesn't represent cities, states or countries. Rather, they represent web and mobile platforms: social networks, e-commerce sites, crowdfunding platforms, education platforms, audio & video platforms, transportation networks, lending, banking and money-transfer platforms, security services, and more. Many of them are managing communities of tens or hundreds of millions of users, and are seeing growth rates upwards of 20% per month. The event is Union Square Ventures' semiannual "Trust, Safety and Security" summit, where each company's trust & safety, security and legal officers and teams convene to learn from one another. In 2010, my colleague Brad Burnham
wrote a post suggesting that web platforms are in many ways more like governments than traditional businesses. This is perhaps a controversial idea, but one thing is unequivocally true: like governments, each platform is in the business of developing policies which enable social and economic activity that is vibrant and safe.

The past 15 or so years have been a period of profound and rapid "regulatory" innovation on the internet. In 2000, most people were afraid to use a credit card on the internet, let alone send money to a complete stranger in exchange for some used item. Today, we're comfortable getting into cars driven by strangers, inviting strangers to spend an evening in our apartments (and vice versa), giving direct financial support to individuals and projects of all kinds, sharing live video of ourselves, taking lessons from unaccredited strangers, etc. In other words, the new economy being built in the internet model is being regulated with a high degree of success.

Of course, that does not mean that everything is perfect and there are no risks. On the contrary, every new situation introduces new risks, and every platform addresses these risks differently, and with varying degrees of success. Indeed, it is precisely the threat of bad outcomes that motivates web platforms to invest so heavily in their "trust and safety" (i.e., regulatory) systems & teams. If they are not ultimately able to make their platforms safe and comfortable places to socialize & transact, the party is over.

As with the startup world in general, the internet approach to regulation is about trying new things, seeing what works and what doesn't, and making rapid (and sometimes profound) adjustments. And in fact, that approach -- watch what's happening, then correct for bad behavior -- is the central idea. So: what characterizes these "regulatory" systems? There are a few common characteristics that run through nearly all of them:
Built on information:
The foundational characteristic of these "internet regulatory systems" is that they wouldn't be possible without large volumes of real-time data describing nearly all activity on the platform (when we think about applying this model to the public sector, this raises additional concerns, which we'll discuss later). This characteristic is what enables everything that follows, and it is the key idea distinguishing these new regulatory systems from the "industrial model" regulatory systems of the 20th century.
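To make "built on information" a bit more concrete, here is a minimal sketch (in Python) of the kind of activity record a platform's trust & safety systems might ingest in real time. The field names and example values are illustrative assumptions on my part, not any particular platform's schema.

```python
# Illustrative only: a toy schema for one real-time platform activity event.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActivityEvent:
    event_id: str       # unique identifier for this event
    actor_id: str       # pseudonymous ID of the user taking the action
    action: str         # e.g. "listing_created", "message_sent", "payment_initiated"
    target_id: str      # the listing, conversation, or counterparty involved
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: dict = field(default_factory=dict)  # device, geography, amounts, etc.

# Every event like this one flows into analytics pipelines and review queues
# as it happens -- which is what makes "watch, then correct" possible at all.
example = ActivityEvent(
    event_id="evt_123",
    actor_id="user_456",
    action="listing_created",
    target_id="listing_789",
    metadata={"category": "apartment", "price_usd": 120},
)
```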
Trust by default (but verify):
Once we have real-time and relatively complete information about platform/community activity, we can radically shift our operating model. We can then, and only then, move from an "up-front permission" model to a "trust but verify" model. Following from this shift are two critical capabilities: a) the ability to operate at very large scale, at low cost, and b) the ability to explicitly promote innovation by not prescribing outcomes from the get-go.
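Here is a minimal sketch of what a "trust by default (but verify)" posture might look like in code. Everything in it is a stand-in assumption -- the risk signals, thresholds, and enforcement hooks are hypothetical, not any real platform's logic -- but it shows the basic inversion: the action goes through immediately, and verification happens afterwards, driven by observed behavior.

```python
# Toy sketch of "trust by default (but verify)". All signals, thresholds,
# and enforcement hooks below are hypothetical stand-ins.

REVIEW_THRESHOLD = 0.7     # above this, route to a human reviewer
ROLLBACK_THRESHOLD = 0.95  # above this, undo the action outright

def score_risk(event) -> float:
    """Hypothetical risk score in [0, 1] based on signals observed after the fact."""
    signals = event.metadata
    score = 0.0
    if signals.get("account_age_days", 365) < 7:
        score += 0.3
    if signals.get("actions_last_hour", 0) > 20:
        score += 0.4
    if signals.get("reports_received", 0) > 0:
        score += 0.4
    return min(score, 1.0)

def allow(event): pass               # placeholder: let the action proceed
def roll_back(event): pass           # placeholder: undo / suspend
def enqueue_for_review(event): pass  # placeholder: human or deeper automated review

def handle(event):
    # 1. Trust by default: no up-front permission step.
    allow(event)
    # 2. Verify afterwards: respond to what actually happens.
    risk = score_risk(event)
    if risk >= ROLLBACK_THRESHOLD:
        roll_back(event)
    elif risk >= REVIEW_THRESHOLD:
        enqueue_for_review(event)
```

The contrast with an "up-front permission" model is that nothing in this sketch requires the actor to apply, be licensed, or wait before acting; the cost of supervision is paid only where the data suggests it is needed.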
Busier is better:
It's fascinating to think about systems that work better the busier they are. Subways, for instance, can run higher-frequency service during rush hour because demand is steady, thereby speeding up travel times when things are busiest. Contrast that with streets, which perform worst exactly when they are needed most (rush hour). Internet regulatory systems -- and eventually all regulatory systems that are built on software and data -- work better the more people use them: not only can they scale to handle large volumes, but they also learn more the more use they see.
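One small way to see the "learns with use" point: the same simple measurement gets sharper as activity grows. The sketch below is purely illustrative -- the categories and "true" problem rates are made up -- but it shows how a platform's estimates (and therefore its policies) become far more reliable as volume increases.

```python
# Illustrative only: estimates of per-category problem rates tighten as
# transaction volume grows, which is what lets a busy platform set
# category-specific policies (holds, deposits, extra verification) with confidence.
import random

random.seed(0)
TRUE_BAD_RATE = {"electronics": 0.08, "tickets": 0.15, "furniture": 0.02}  # made up

def observed_bad_rate(category: str, n_transactions: int) -> float:
    """Fraction of n simulated transactions in a category that turn out 'bad'."""
    p = TRUE_BAD_RATE[category]
    bad = sum(1 for _ in range(n_transactions) if random.random() < p)
    return bad / n_transactions

for n in (100, 10_000, 1_000_000):
    estimates = {c: round(observed_bad_rate(c, n), 3) for c in TRUE_BAD_RATE}
    print(f"{n:>9} transactions per category -> observed rates: {estimates}")
```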
Responsive policy development:
Now that we have high-quality, relatively comprehensive information, have adopted a "trust but verify" model that allows many actors to begin participating, and have invited as much use as we can, we're able to approach policy development from a very different perspective. Rather than looking at a situation and debating hypothetical "what-ifs", we can see very concretely where good and bad activity is happening, and can begin experimenting with policies and procedures to encourage the good activity and limit the bad.

If you are thinking that this is a very different approach -- powerful, but also pretty scary -- you are right! This model does a lot of things that our 20th-century common sense tells us to be wary of. It allows widespread activity before risk has been fully assessed, and it hands massive amounts of real-time data, and massive amounts of power, to the "regulators" who set policies based on that information.

So, would it be possible to apply these ideas to public sector regulation? Can we do it in such a way that actually allows new innovations to flourish, pushing back against our reflexive urge to de-risk all new activities before allowing them? Can & should the government be trusted with all of that personal data? These are all important questions, and ones that we'll address in forthcoming sections. Stay tuned.
For the past several years, I have been an advisor to the Data-Smart City Solutions initiative at the Harvard Kennedy School of Government. This is a group tasked with helping cities consider how to govern in new ways using the volumes of new data that are now available. An adjacent group at HKS is the Program on Municipal Innovation (PMI), which brings together a large group of city managers (deputy mayors and other operational leaders) twice a year to talk shop. I've had the honor of attending this meeting a few times in the past, and I must say it's inspiring and encouraging to see urban leaders from across the US come together to learn from one another. One of the PMI's latest projects is an initiative on regulatory reform -- studying how, exactly, cities can go about assessing existing rules and regulations, and revising them as necessary. As part of this initiative, I've been writing up a short white paper on "Regulation 2.0" -- the idea that government can adopt some of the "regulatory" techniques pioneered by web platforms to achieve trust and safety at scale. Over the course of this week, I'll publish my latest drafts of the sections of the paper. Here's the outline I'm working on:
Regulation 1.0 vs. Regulation 2.0: an example
Context: technological revolutions and the search for trust
Today’s conflict: some concrete examples
Web platforms as regulatory systems
Regulation 2.0: applying the lessons of web platform regulation to the real world
Section 1 will be an adaptation of this post from last year. My latest draft of section 2 is below. I'll publish the remaining sections over the course of this week. As always, any and all feedback is greatly appreciated! ====
Yesterday I spent part of the afternoon at a US Patent & Trademark Office roundtable discussion on using crowdsourcing to improve the patent examination process. Thanks to Chris Wong for looping me in and helping to organize the event. If you're interested, you can watch the whole video here. I was there not as an expert in patents, but as someone who represents lots of small startup internet companies facing patent issues, and as someone who spends a lot of time on the problem of how to solve challenges through collaborative processes (basically everything USV invests in). Here are my slides, and I'll just highlight two important points. First: why do we care about this? Because (generally speaking) small internet companies typically see more harm than benefit from the patent system.
And second, there are many ways to contemplate "crowdsourcing" with regard to patent examination. In the most straightforward sense, the PTO could construct a way for outsiders to submit prior art on pending patent applications -- this is the model pioneered by Peer to Patent, and built upon by Stack Exchange's Ask Patents community. The challenge with this approach is that while structured crowdsourcing around complex problems is proven to work, it's really hard to get right. A big risk facing the PTO is investing a lot in a web interface for this, in a "big bang" sort of way (a la healthcare.gov), not getting it right, and then seeing the whole thing as a failure. With that in mind, I posed the idea that getting "crowdsourcing" right is really a cultural issue, not a technical one. In other words, making it work is not just about building the right website and hoping people will come. Getting it right will mean changing the way you connect and engage with "the crowd". As Micah Siegel from Ask Patents put it, "you can't do crowdsourcing without a crowd". We also talked about the importance of APIs and open data in all of this, so that people can build applications (simple ones, like notifications or tweets, or complex ones involving workflow) around the examination process. Tying those three ideas together (changing culture, going where "the crowd" already is, and taking an API-first approach), it seems like there is a super clear path to getting started:
Set up a simple, public "uspto-developers" Google Group and invite interested developers to join the discussion there.
Stand up a basic API for patent search that sites like Ask Patents and others could use (they specifically asked for this, and already have an active community) -- a rough sketch of what such an endpoint might look like follows below.
That would be a really simple way to start, would be guaranteed to bear fruit in the near term, and would also help guide subsequent steps.
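To make the API suggestion a bit more tangible, here is a rough, hypothetical sketch of what a minimal patent-search endpoint could look like. FastAPI is just one convenient choice; the route, parameters, and the tiny in-memory "index" are my own stand-ins, not anything the PTO has proposed.

```python
# Hypothetical sketch of a minimal patent-search API -- not a real USPTO endpoint.
from fastapi import FastAPI, Query

app = FastAPI(title="Hypothetical patent search API")

# Stand-in for a real index of published applications and granted patents.
FAKE_INDEX = [
    {"number": "US0000001", "title": "Method for example purposes", "abstract": "..."},
    {"number": "US0000002", "title": "System for illustration only", "abstract": "..."},
]

@app.get("/v1/patents/search")
def search(q: str = Query(..., min_length=2), limit: int = 20):
    """Very naive keyword match; a real service would sit on a proper search index."""
    needle = q.lower()
    hits = [p for p in FAKE_INDEX
            if needle in p["title"].lower() or needle in p["abstract"].lower()]
    return {"query": q, "count": len(hits), "results": hits[:limit]}

# Run locally with:  uvicorn patents:app --reload   (assuming this file is patents.py)
```

The point is less the code than the posture: publish a small, documented, open endpoint early, let communities like Ask Patents build on it, and let their usage guide what to build next.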
It felt like a productive discussion -- I appreciate how hard it is to approach an old problem in a new way, and get the sense that the PTO is taking a real stab at it.
Technological revolutions and the search for trust
The search for trust amidst rapid change, as described in the previous section, is not a new thing. It is, in fact, a natural and predictable response to times when new technologies fundamentally change the rules of the game. We are in the midst of a major technological revolution, the likes of which we experience only once or twice per century. Economist Carlota Perez describes these waves of massive technological change as "great surges", each of which involves "profound changes in people, organizations and skills in a sort of habit-breaking hurricane."[1] This sounds very big and scary, of course, and it is.

Perez's study of technological revolutions over the past 250 years -- five distinct great surges lasting roughly fifty years each -- shows that as we develop and deploy new technologies, we repeatedly break and rebuild the foundations of society: economic structures, social norms, laws and regulations. It's a wild, turbulent and unpredictable process. Despite the inherent unpredictability of new technologies, Perez found that each of these great surges does, in fact, follow a common pattern. First, a new technology opens up a massive new opportunity for innovation and investment. Second, the wild rush to explore and implement this technology produces vast new wealth, while at the same time causing massive dislocation and angst, often resulting in a bubble bursting and a recession. Finally, broader cultural adoption paired with regulatory reforms sets the stage for a smoother and more broadly prosperous period of growth, resulting in the full deployment of the mature technology and all of its associated social and institutional changes. And of course, by the time each fifty-year surge concluded, the seeds of the next one had been planted.
So essentially: wild growth, societal disruption, then readjustment and broad adoption. Perez describes the "readjustment and broad adoption" phase (what she calls the "deployment period") as the percolation of the new "common sense" throughout other aspects of society:
“the new paradigm eventually becomes the new generalized ‘common sense’, which gradually finds itself embedded in social practice, legislation and other components of the institutional framework, facilitating compatible innovations and hindering incompatible ones.”[2]
In other words, once the established powers of the previous paradigm are done fighting off the new paradigm (typically after some sort of profound blow-up), we come around to adopting the techniques of the new paradigm to achieve the sense of trust and safety that we had come to know in the previous one. Same goals, new methods. As it happens, our current “1.0” regulatory model was actually the result of a previous technological revolution. In The Search for Order: 1877-1920[2], Robert H. Wiebe describes the state of affairs that led to the progressive era reforms of the early 20th century:
Established wealth and power fought one battle after another against the great new fortunes and political kingdoms carved out of urban-industrial America, and the more they struggled, the more they scrambled the criteria of prestige. The concept of a middle class crumbled at the touch. Small business appeared and disappeared at a frightening rate. The so-called professions meant little as long as anyone with a bag of pills and a bottle of syrup could pass for a doctor, a few books and a corrupt judge made a man a lawyer, and an unemployed literate qualified as a teacher.
This sounds a lot like today, right? A new techno-economic paradigm (in this case, urbanization and inter-city transportation) broke the previous model of trust (isolated, closely-knit rural communities), forcing a re-thinking of how to find that trust. During the "bureaucratic revolution" of the early 20th century's progressive reforms, the answer to this problem was the establishment of institutions -- on the private side, firms with trustworthy brands, and on the public side, regulatory bodies -- that took on the burden of ensuring public safety and the trust & security needed to underpin the economy and society.

Coming back to today: we are currently in the middle of one of these fifty-year surges -- the paradigm of networked information -- and roughly in the middle of the pattern described above: we've seen wild growth, intense investment, and profound conflicts between the new paradigm and the old. What this paper is about, then, is how we might consider adopting the tools & techniques of the networked information paradigm to achieve the societal goals previously achieved through the 20th century's "industrial" regulations and public policies. A "2.0" approach, if you will, that adopts the "common sense" of the internet era to build a foundation of trust and safety.

Coming up: a look at some concrete examples of the tensions between the networked information era and the industrial era; a view into the world of web platforms' "trust and safety" teams and the model of regulation they're pioneering; and finally, some specific recommendations for how we might envision a new paradigm for regulation that embraces the networked information era.

=== Footnotes: