This is part 3 in a series of posts I'm developing into a white paper on "Regulation 2.0" for the Program on Municipal Innovation at the Harvard Kennedy School of Government. For many tech industry readers of this blog, these ideas may seem obvious, but they are not intended for you! They are meant to help bring a fresh perspective to public policy makers who may not be familiar with the trust and safety systems underpinning today's social/collaborative web platforms.

Twice a year, a group of regulators and policymakers convenes to discuss their approaches to ensuring trust, safety and security in their large and diverse communities. Topics on the agenda range from financial fraud, to bullying, to free speech, to transportation, to child predation, to healthcare, to the relationship between the community and law enforcement. Each is experimenting with new ways to address these community issues. As their communities grow (very quickly in some cases) and become more diverse, it's increasingly important that whatever approaches they implement can both scale to accommodate large volumes and rapid growth, and adapt to new situations. There is a lot of discussion about how data and analytics are used to help guide decision-making and policy development. And of course, they are all working within the constraints of relatively tiny staffs and relatively tiny budgets.

As you may have guessed, this group of regulators and policymakers doesn't represent cities, states or countries. Rather, they represent web and mobile platforms: social networks, e-commerce sites, crowdfunding platforms, education platforms, audio & video platforms, transportation networks, lending, banking and money-transfer platforms, security services, and more. Many of them are managing communities of tens or hundreds of millions of users, and are seeing growth rates upwards of 20% per month. The event is Union Square Ventures' semiannual "Trust, Safety and Security" summit, where each company's trust & safety, security and legal officers and teams convene to learn from one another.
In 2010, my colleague Brad Burnham suggested that web platforms are in many ways more like governments than traditional businesses. This is perhaps a controversial idea, but one thing is unequivocally true: like governments, each platform is in the business of developing policies which enable social and economic activity that is vibrant and safe.

The past 15 or so years have been a period of profound and rapid "regulatory" innovation on the internet. In 2000, most people were afraid to use a credit card on the internet, let alone send money to a complete stranger in exchange for some used item. Today, we're comfortable getting into cars driven by strangers, inviting strangers to spend an evening in our apartments (and vice versa), giving direct financial support to individuals and projects of all kinds, sharing live video of ourselves, taking lessons from unaccredited strangers, etc. In other words, the new economy being built in the internet model is being regulated with a high degree of success.

Of course, that does not mean that everything is perfect and there are no risks. On the contrary, every new situation introduces new risks, and every platform addresses these risks differently, and with varying degrees of success. Indeed, it is precisely the threat of bad outcomes that motivates web platforms to invest so heavily in their "trust and safety" (i.e., regulatory) systems & teams. If they are not ultimately able to make their platforms safe and comfortable places to socialize & transact, the party is over.

As with the startup world in general, the internet approach to regulation is about trying new things, seeing what works and what doesn't, and making rapid (and sometimes profound) adjustments. In fact, that approach -- watch what's happening, then correct for bad behavior -- is the central idea.

So: what characterizes these "regulatory" systems? There are a few common characteristics that run through nearly all of them:
Built on information:
The foundational characteristic of these "internet regulatory systems" is that they wouldn't be possible without large volumes of real-time data describing nearly all activity on the platform (applying this model to the public sector raises additional concerns, which we'll discuss later). This characteristic is what enables everything that follows, and is the key idea distinguishing these new regulatory systems from the "industrial model" regulatory systems of the 20th century.
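To make that concrete, here is a minimal sketch of the kind of activity record a platform might capture. The field names and structure are my own illustrative assumptions, not any particular platform's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActivityEvent:
    """One record in a platform's activity stream (illustrative fields only)."""
    actor_id: str    # who took the action
    action: str      # e.g. "listing.created", "message.sent", "payment.sent"
    target_id: str   # what (or whom) the action was directed at
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: dict = field(default_factory=dict)  # device, IP, amount, etc.

# Nearly every action on the platform emits an event like this, giving
# trust & safety teams a near-complete, real-time picture of activity:
event = ActivityEvent(
    actor_id="user_123",
    action="payment.sent",
    target_id="user_456",
    metadata={"amount_usd": 250.00, "ip": "203.0.113.7"},
)
```

Everything else in this list -- verification, learning, policy experiments -- runs on top of a stream like this.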
Trust by default (but verify):
Once we have real-time and relatively complete information about platform/community activity, we can radically shift our operating model. We can then, and only then, move from an "up-front permission" model to a "trust but verify" model. Two critical capabilities follow from this shift: a) the ability to operate at very large scale, at low cost, and b) the ability to explicitly promote "innovation" by not prescribing outcomes from the get-go.
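Here is a minimal sketch of the pattern (my own illustration, not any platform's actual system): actions execute immediately, and risk review happens after the fact against the recorded activity. The risk model and threshold are pure assumptions:

```python
import queue

def execute(event: dict) -> None:
    """Carry out the requested action immediately (stub for illustration)."""
    print(f"executed {event['action']} by {event['actor_id']}")

def risk_score(event: dict) -> float:
    """Toy risk model (an assumption): large first-time payments look riskier."""
    risky = event.get("amount_usd", 0) > 1000 and event.get("is_new_user", False)
    return 0.95 if risky else 0.10

review_queue: queue.Queue = queue.Queue()

def handle_action(event: dict) -> None:
    """Trust by default: no up-front permission step."""
    execute(event)            # the user is not made to wait for approval
    review_queue.put(event)   # but everything is recorded for review

def verify_pending(threshold: float = 0.9) -> list:
    """Verify after the fact: flag recorded activity that looks suspicious."""
    flagged = []
    while not review_queue.empty():
        event = review_queue.get()
        if risk_score(event) > threshold:
            flagged.append(event)   # escalate to a human reviewer
    return flagged

handle_action({"actor_id": "user_123", "action": "payment.sent",
               "amount_usd": 2500, "is_new_user": True})
print(verify_pending())   # the large first-time payment gets flagged
```

The design choice worth noticing: the gatekeeping moved from before the action (a license, a permit) to after it (a review), which is what makes low-cost scale possible.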
Busier is better:
It's fascinating to think about systems that work better the busier they are. Subways, for instance, can run higher-frequency service during rush hour thanks to steady demand, thereby speeding up travel times when things are busiest. Contrast that with streets, which perform worst exactly when they are needed most (rush hour). Internet regulatory systems -- and eventually all regulatory systems that are built on software and data -- work better the more people use them: not only are they able to scale to handle large volumes, they learn more the more use they see.
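A toy way to see why (my illustration, not from any platform): the statistical confidence behind any behavioral signal tightens as volume grows, so a busier platform can act on the same underlying signal far more decisively:

```python
import math

def bad_activity_estimate(reports: int, actions: int) -> tuple:
    """Estimate a bad-activity rate and its standard error from observed volume."""
    p = reports / actions
    stderr = math.sqrt(p * (1 - p) / actions)  # shrinks as volume grows
    return p, stderr

# The same underlying 2% rate, observed at two different scales:
print(bad_activity_estimate(2, 100))        # (0.02, ~0.014) -- noisy signal
print(bad_activity_estimate(200, 10_000))   # (0.02, ~0.0014) -- 10x sharper
```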
Responsive policy development:
Now that we have high-quality, relatively comprehensive information, have adopted a "trust but verify" model that allows many actors to begin participating, and have invited as much use as we can, we are able to approach policy development from a very different perspective. Rather than looking at a situation and debating hypothetical "what-ifs", we can see very concretely where good and bad activity is happening, and can begin experimenting with policies and procedures to encourage the good activity and limit the bad (see the sketch of one such experiment at the end of this section).

If you are thinking: wow, that's a pretty different, powerful, but very scary approach, you are right! This model does a lot of things that our 20th-century common sense should be wary of. It allows for widespread activity before risk has been fully assessed, and it provides massive amounts of real-time data, and massive amounts of power, to the "regulators" who set policies based on this information.

So: would it be possible to apply these ideas to public sector regulation? Can we do it in a way that actually allows new innovations to flourish, pushing back against our reflexive urge to de-risk all new activities before allowing them? Can & should the government be trusted with all of that personal data? These are all important questions, and ones that we'll address in forthcoming sections. Stay tuned.
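Before moving on, here is what the kind of policy experiment described above might look like: a candidate rule rolled out to a small slice of users, with outcomes compared against a control group. This is a minimal, hypothetical sketch; the stub functions stand in for a real intervention and a real outcome metric computed from the activity log:

```python
import random

def policy_experiment(users, apply_policy, bad_activity_rate, sample=0.05):
    """Roll a candidate policy out to a small slice and compare outcomes."""
    treatment = set(random.sample(users, int(len(users) * sample)))
    control = [u for u in users if u not in treatment]
    for user in treatment:
        apply_policy(user)   # e.g. require ID verification for large payments
    return {
        "treatment_bad_rate": bad_activity_rate(sorted(treatment)),
        "control_bad_rate": bad_activity_rate(control),
    }   # expand the policy only if the treatment rate is measurably lower

users = [f"user_{i}" for i in range(1000)]
result = policy_experiment(
    users,
    apply_policy=lambda u: None,           # stub: the intervention itself
    bad_activity_rate=lambda group: 0.0,   # stub: measured from the event log
)
print(result)
```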
For the past several years, I have been an advisor to the Data-Smart City Solutions initiative at the Harvard Kennedy School of Government. This is a group tasked with helping cities consider how to govern in new ways using the volumes of new data that are now available. An adjacent group at HKS is the Program on Municipal Innovation (PMI), which brings together a large group of city managers (deputy mayors and other operational leaders) twice a year to talk shop. I've had the honor of attending this meeting a few times in the past, and I must say it's inspiring and encouraging to see urban leaders from across the US come together to learn from one another. One of the PMI's latest projects is an initiative on regulatory reform -- studying how, exactly, cities can go about assessing existing rules and regulations, and revising them as necessary. As part of this initiative, I've been writing up a short white paper on "Regulation 2.0" -- the idea that government can adopt some of the "regulatory" techniques pioneered by web platforms to achieve trust and safety at scale. Over the course of this week, I'll publish my latest drafts of the sections of the paper. Here's the outline I'm working on:
Regulation 1.0 vs. Regulation 2.0: an example
Context: technological revolutions and the search for trust
Today’s conflict: some concrete examples
Web platforms as regulatory systems
Regulation 2.0: applying the lessons of web platform regulation to the real world
Section 1 will be an adaptation of this post from last year. My latest draft of section 2 is below. I'll publish the remaining sections over the course of this week. As always, any and all feedback is greatly appreciated! ====
I wrote earlier this week about how life is, generally, hard. There's no question about that. One of my favorite things about the Internet, and probably the most exciting thing about working in venture capital, is being around people who are working to re-architect the world to make hard things easier. And by easier, I mean: by designing clever social / technical / collaborative hacks that redesign both the problem and the solution.

Yesterday, I was out in SF for USV's semiannual Trust, Safety and Security summit -- Brittany runs USV portfolio summits twice a month, and this is one I don't miss. It brings together folks working on Trust and Safety issues (everything from fraud, to bullying, to child safety, to privacy) and Security issues (securing offices & servers, defending against hacker attacks, etc.). Everyone learns from everyone else about how to get better at all of these important activities.

Trust, Safety and Security teams are the unsung heroes of every web platform. What they do is largely invisible to end users, and you usually only hear about them when something goes wrong. They are the ones building the internal systems that make it possible to buy from a stranger online, to get into someone's car, to let your kid use the internet. If web platforms were governments, they would be the legislature, law enforcement, national security, and social services.

Oftentimes at these summits, we bring in outside guests who have particular expertise in some area. At yesterday's summit, our guest was Alex Rice, formerly head of Product Security at Facebook, and now founder of HackerOne. Side note: it was fascinating to hear about how Facebook bakes security into every product and engineering team -- subject for a later post.

For today: HackerOne is a fascinating platform that takes something really hard -- security testing -- and architects it to be (relatively) easy, by incentivizing the identification and closing out of security holes in web applications and open source projects. The magic of HackerOne is solving for incentives and awkwardness on both sides (tech companies and security researchers). Security researchers are infamous for finding flaws in web platforms and then, if the platform doesn't respond and fix them, going public. This is only a semi-effective system, and it's very adversarial. HackerOne solves for this by letting web platforms sign up (either publicly or privately) to attract hackers/researchers, mediating the process of identifying, fixing, and publicizing bugs, and paying out "bug bounties" to the hackers (see the sketch of this workflow at the end of this post). Platforms get stronger, hackers get paid. In the year that it's been operating, HackerOne has resolved over 5,000 bugs and paid out over $1.6mm in bug bounties.

Thinking about this, it strikes me that there are a few common traits of platforms that successfully re-architect something from hard --> easy:

Structure and incentives:
The secret sauce here is mediating the tasks in a new way, and cleverly building incentives for everyone to participate. Companies don't like to admit they might have security holes. They don't like to engage with abrasive outside researchers. Email isn't a very accountable mode of communication for this. But HackerOne is figuring out how to solve for that -- if every company has a HackerOne page, there's nothing to fear about having one. Building a workflow around bug finding / solving / publicizing solves a lot of practical problems (like making payments and getting multi-party sign-off on going public).
Money that's small for a big company is big for an individual researcher -- one hacker recently earned $20k in bug bounties in a single month, from a single company. Essentially, HackerOne is doing for security bugs what StackOverflow has done for technical Q&A: taking a messy, hard, unattractive problem with a not-very-effective solution and re-architecting it to be easy, attractive and magical.

Vastly broadening the pool of participants:
After the summit, I asked Alex how old the youngest successful bug finder on the platform is. Any guesses? 11. Right: an 11-year-old found a security hole in a website and got paid for it. Every successful hard --> easy solution on the internet does this. Another of my favorite examples is CrowdMed, where a community of solvers makes hard medical diagnoses that specialists could not -- 70% of the solvers are not doctors. (They typically solve it with an "oh, my friend has those symptoms; maybe it's ____" approach, which you can only do at web scale.)

Deep personal experience:
It takes a lot of subject matter expertise to get these nuances right. It makes sense that Alex was a security specialist, that Joel at StackOverflow has been building developer tools for nearly two decades, and that Jared at CrowdMed was inspired by his own sister's experience with a rare, difficult-to-diagnose disease. I would like to think that it's also possible to do this without that deep expertise, but it seems clear that it helps a lot.

The fact that it's not only possible to make hard things easy, but that smart people everywhere are building things that do it right now, is what gets me going every day.
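To illustrate the mediated workflow described above -- this is my own hypothetical sketch, not HackerOne's actual system or API -- the key move is encoding disclosure as explicit states, with multi-party sign-off gating publication:

```python
from enum import Enum, auto

class ReportState(Enum):
    SUBMITTED = auto()
    TRIAGED = auto()
    FIXED = auto()
    BOUNTY_PAID = auto()
    DISCLOSED = auto()   # the public write-up, gated on multi-party sign-off

class BugReport:
    """A vulnerability report moving through a mediated disclosure workflow."""

    def __init__(self, researcher: str, company: str, summary: str):
        self.researcher, self.company, self.summary = researcher, company, summary
        self.state = ReportState.SUBMITTED
        self.signoffs: set = set()

    def sign_off(self, party: str) -> None:
        self.signoffs.add(party)

    def advance(self, new_state: ReportState) -> None:
        """Move the report forward; going public requires both sides to agree."""
        if new_state is ReportState.DISCLOSED and self.signoffs < {self.researcher, self.company}:
            raise ValueError("public disclosure needs sign-off from both sides")
        self.state = new_state

report = BugReport("researcher_1", "example_co", "stored XSS in profile page")
report.advance(ReportState.TRIAGED)
report.advance(ReportState.FIXED)
report.advance(ReportState.BOUNTY_PAID)   # the company pays; the hacker gets credit
report.sign_off("researcher_1")
report.sign_off("example_co")
report.advance(ReportState.DISCLOSED)     # only now does the bug go public
```

Notice how the structure itself resolves the old adversarial dynamic: neither side can unilaterally go public, and the bounty payment is a first-class step rather than an afterthought.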
Technological revolutions and the search for trust
The search for trust amidst rapid change, as described in the previous section, is not a new thing. It is, in fact, a natural and predictable response to times when new technologies fundamentally change the rules of the game.

We are in the midst of a major technological revolution, the likes of which we experience only once or twice per century. Economist Carlota Perez describes these waves of massive technological change as "great surges", each of which involves "profound changes in people, organizations and skills in a sort of habit-breaking hurricane."[1] This sounds very big and scary, of course, and it is. Perez's study of technological revolutions over the past 250 years -- five distinct great surges lasting roughly fifty years each -- shows that as we develop and deploy new technologies, we repeatedly break and rebuild the foundations of society: economic structures, social norms, laws and regulations. It's a wild, turbulent and unpredictable process.

Despite the inherent unpredictability of new technologies, Perez found that each of these great surges follows a common pattern. First, a new technology opens up a massive new opportunity for innovation and investment. Second, the wild rush to explore and implement this technology produces vast new wealth, while at the same time causing massive dislocation and angst, often culminating in a bubble bursting and a recession. Finally, broader cultural adoption paired with regulatory reforms sets the stage for a smoother and more broadly prosperous period of growth, resulting in the full deployment of the mature technology and all of its associated social and institutional changes. And of course, by the time each fifty-year surge concludes, the seeds of the next one have been planted.
So essentially: wild growth, societal disruption, then readjustment and broad adoption. Perez describes the "readjustment and broad adoption" phase (her "deployment period") as the new "common sense" percolating through other aspects of society:
“the new paradigm eventually becomes the new generalized ‘common sense’, which gradually finds itself embedded in social practice, legislation and other components of the institutional framework, facilitating compatible innovations and hindering incompatible ones.”[2]
In other words, once the established powers of the previous paradigm are done fighting off the new one (typically after some sort of profound blow-up), we come around to adopting the techniques of the new paradigm to achieve the sense of trust and safety that we had come to know in the previous one. Same goals, new methods.

As it happens, our current "1.0" regulatory model was itself the result of a previous technological revolution. In The Search for Order: 1877-1920[3], Robert H. Wiebe describes the state of affairs that led to the Progressive Era reforms of the early 20th century:
Established wealth and power fought one battle after another against the great new fortunes and political kingdoms carved out of urban-industrial America, and the more they struggled, the more they scrambled the criteria of prestige. The concept of a middle class crumbled at the touch. Small business appeared and disappeared at a frightening rate. The so-called professions meant little as long as anyone with a bag of pills and a bottle of syrup could pass for a doctor, a few books and a corrupt judge made a man a lawyer, and an unemployed literate qualified as a teacher.
This sounds a lot like today, right? A new techno-economic paradigm (in this case, urbanization and inter-city transportation) broke the previous model of trust (isolated, closely-knit rural communities), forcing a re-thinking of how to establish that trust. During the "bureaucratic revolution" of the early 20th-century progressive reforms, the answer was the establishment of institutions -- on the private side, firms with trustworthy brands, and on the public side, regulatory bodies -- that took on the burden of ensuring public safety and the trust & security needed to underpin the economy and society.

Coming back to today: we are currently in the middle of one of these fifty-year surges -- the paradigm of networked information -- and roughly at the midpoint of Perez's pattern. We've seen wild growth, intense investment, and profound conflicts between the new paradigm and the old. What this paper is about, then, is how we might adopt the tools & techniques of the networked information paradigm to achieve the societal goals previously achieved through the 20th century's "industrial" regulations and public policies. A "2.0" approach, if you will, that adopts the "common sense" of the internet era to build a foundation of trust and safety.

Coming up: a look at some concrete examples of the tensions between the networked information era and the industrial era; a view into the world of web platforms' "trust and safety" teams and the model of regulation they're pioneering; and finally, some specific recommendations for how we might envision a new paradigm for regulation that embraces the networked information era.

===

Footnotes:

[1] Carlota Perez, Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (Edward Elgar, 2002).

[2] Ibid.

[3] Robert H. Wiebe, The Search for Order: 1877-1920 (Hill and Wang, 1967).