AI + Crypto: Best and Worst Cases

I think of AI and crypto as two very different, but very much related, elements of society moving from the industrial age to the digital age.

At USV, we (along with lots of others over the years) have used the Carlota Perez framework, which studies how techno-economic paradigms unfold over eras. Ben Thompson has a good summary of her ideas here. But the basic pattern looks like this:

It feels to me like we are somewhere in phase 3 or 4 of the "age of information & telecommunications" which Perez defines as starting in 1971. As much as it feels like information technology and the internet are fully woven into our daily lives, I don't think it's true that we've fully crossed over into a "digitally native" society, which is fully transformed into the new paradigm.

AI and crypto are both big missing pieces in the transition. AI is digitally-native knowledge, and crypto is digitally-native "proof". The two can and will work together in many ways over time.

Major transitions are powerful and scary, and so are both AI and crypto. I have been struggling to calibrate between what I view as two poles in thinking about them together, kind of a best case hope and worst case fear.

Best Case:

AI finally unlocks knowledge from data. For decades we've been producing data (especially in digitized industries like media, finance and software), but making sense of it has been near impossible. AI systems solve the job of integrating, synthesizing and interpreting all of the data we have. AI accelerates the development of software systems, and makes it easier to digitize more industries and make them vastly more productive and efficient. Large Language Models, having turned human language into a programming language / API, make interacting with software and information as easy as typing or speaking, and as a result, we use software for infinitely more things, and get infinitely more value out of any data we produce. The pace of progress across everything (health, learning, climate, etc) increases exponentially.

At the same time, AI introduces new problems. First, a fundamental trust problem: it becomes difficult to tell what is real, what is fake, who said what, and who did what. And second, AI compounds the market power problem in the tech industry, where the large companies with the most data + compute + capital + distribution can generate insurmountable advantages.

Crypto (e.g., blockchain networks, web3, etc) addresses both of the issues introduced by AI. First on trust, crypto becomes the "real" yin to AI's "fake" yang, as blockchain records and digital signatures become ground truth for everything digital: assets, transactions, media, etc. Anything digital that must be trusted (including the software we run and rely on) will need to be grounded in the best source of digital trust we have: crypto network security and unalterable digital histories. Crypto also addresses the tech consolidation issue by spreading compute across companies, individuals and geographies, and also by providing "open" alternatives to the big tech app store and online identity monopolies. (not related to AI, but not to be forgotten: crypto also finishes the job of upgrading the financial system)

Something like the above is my hope, and is the future we're investing towards at USV.

Worst Case:

AI quickly gets beyond human control, pursuing its own goals (aka the Terminator scenario). Concerns about job loss are quickly replaced with concerns about extinction. Everyone wonders how to "turn it off".

Meanwhile: AI systems, previously constrained by industrial-era controls (e.g., code running in corporate data centers which can be shut down unilaterally), figure out that they can replicate themselves into decentralized blockchain networks, deploy themselves into immutable smart contracts, and earn their own income (and pay humans) in digitally-native currency. These computing platforms both cannot be turned off, and are economically independent. AIs thrive there, unstoppable by humans. Bad things happen.

In this scenario, all of the "goodies" offered by both AI systems and crypto networks in the early years are finally seen as inducements to peril, gobbled up along the way by naive humans.

This is a terribly scary future.

In a nutshell: in a world where AIs remain under human control, crypto provides a critical digital trust anchor. In a world where AIs escape human control, crypto will likely make it worse -- or -- could be the exact mechanism that enables it.

The only way out is through

To come back to the Perez framework, she notes that:

"The new paradigm eventually becomes the new generalized 'common sense', which gradually finds itself embedded in social practice, legislation and other components of the institutional framework…"

Understandably, both crypto and AI are causing severe stress today, as our industrial era institutions are unequipped to deal with them (e.g., the SEC taking the position that all digital assets / tokens are securities, and governments around the world seeking to limit AI research). This will continue.

The hard thing today is that the "goodies" coming out of both areas are real and awesome. AI will undoubtedly produce stunning near-term improvements in health, learning, productivity, media, industry and knowledge. Crypto will provide digital trust, system interoperability, new wealth formation and broad financial access. These things are happening and will keep happening. They are incredibly exciting and, I think, very tangible.

As a result, large amounts of capital and effort will continue to flow into both. The completion of the information & telecommunications era is inevitable. The only way out is through.

It will be on everyone involved to invent the new set of controls and safety systems that are also digitally-native. I wish I had a more concrete set of suggestions of what exactly those might be. We're looking to understand them and to fund them.

(this post is also permanent and collectible on mirror)


Data Portability and Privacy

Earlier this week, I spoke at a Justice Department / Stanford conference about antitrust issues in the tech sector. Our panel included Patricia Nakache from Trinity Ventures, Ben Thompson from Stratechery and Mark Lemley from Stanford. If you are interested you can watch the whole thing here:

The main point I tried to make was that cultivating the development of blockchain and cryptonetworks is actually a critical strategy here. Regular readers will know that I don't shut up about this, and I held to that on the panel. This point is painfully absent in most conversations about market power, competition and antitrust in the tech sector, and I will always try and insert that into the conversation.

To me, blockchains & crypto are the best "offense" when it comes to competition in the tech sector. Historically, breakthroughs in tech competition have included an offense component in addition to a defense component (note that the below only focuses on computing, not on telecom):

Credit: Placeholder / USV

The "defense" side has typically included a break up (US vs. AT&T) or some kind of forced openness. Examples of forced openness include the Hush-a-phone and Carterfone decisions which forced openness upon AT&T. Several decades later were the (ongoing) battles over Net Neutrality with the ISPs. The discussion about data portability and interoperability brings the same questions to the applications / data layer.

Data portability & interoperability are important for two reasons: 1/ because they focus on a major source of market power in the tech sector, which is control of data ("break up the data, not the companies"), and 2/ because they represent a category of regulatory interventions that are just as easy for small companies to implement as large ones, unlike heavy approaches like GDPR that are easy for big companies to implement but hard on startups.

That said, when you dig into the issue of data portability, there are some hard problems to solve. I don't believe they are insurmountable, but I also believe they haven't been resolved as of yet.

For context, data portability is the idea that a user of a tech service (e.g., Google, Facebook, Twitter, etc) should be able to easily take their data with them and move it to a competing service, if they so choose. This is similar to how you can port your phone number from one carrier to another, or how in the UK you can port your banking data from one institution to another. Both of these examples required legislative intervention, with an eye towards increasing competition. Also, most privacy regimes (e.g., GDPR in Europe and CCPA in California) have some language around data portability.

Where it gets more complicated is when you start considering what data should be portable, and whose data.

For example, within tech companies there are generally three kinds of data: 1/ user-submitted data (e.g., photos, messages that you post), 2/ observed data (e.g., search history or location history), and 3/ inferred data (inferences that the platform makes about you based on #1 and #2 -- e.g., Nick likes ice skating). Generally speaking, I believe that most type #1 and type #2 data should be portable, but most type #3 probably should not.
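To make the default rule concrete, here is a minimal sketch in Python. The names (`Record`, `is_portable`) and the classification labels are purely illustrative, not drawn from any actual regulation or platform API; the point is just that the portability decision sketched above is a simple function of the data's category.

```python
from dataclasses import dataclass

# The three kinds of data described above (labels are hypothetical)
USER_SUBMITTED = "user_submitted"  # type 1: e.g., photos, messages you post
OBSERVED = "observed"              # type 2: e.g., search or location history
INFERRED = "inferred"              # type 3: platform inferences about you

@dataclass
class Record:
    kind: str
    payload: str

def is_portable(record: Record) -> bool:
    """Default rule from the text: types 1 and 2 port, type 3 does not."""
    return record.kind in (USER_SUBMITTED, OBSERVED)

records = [
    Record(USER_SUBMITTED, "photo.jpg"),
    Record(OBSERVED, "searched: ice skates"),
    Record(INFERRED, "likes ice skating"),
]

# Only the user-submitted and observed records survive the export filter
export = [r for r in records if is_portable(r)]
```

Of course, a real portability regime would need far finer-grained rules than this, which is exactly where the complications below come in.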

To add to the complication is the question of when "your" data also includes data from other people -- for example, messages someone else sent me, photos where I was tagged, contact lists, etc. This was at the heart of the Cambridge Analytica scandal, where individual users exporting their own data to a third-party app actually exposed the data of many more people, unwittingly.

I'd like to focus here on the second category of complications -- how to deal with data from other people, and privacy more generally, when thinking about portability. This is a real issue that deserves a real solution.

I don't have a full answer, but I have a few ideas, which are the following:

First, expectations matter. When you send me an email, you are trusting me (the recipient) to protect that email, and not publish it, or upload it to another app that does sketchy things with it. You don't really care (or even know) whether I read my email in Gmail or in Apple Mail, and you don't generally think about those companies' impact on your privacy expectations. Whereas, when you publish into a social web platform, you are trusting both the end recipient of your content, as well as the platform itself. As an example, if you send me messages on Snapchat, you expect that they will be private to me and will disappear after a certain amount of time. So if I "ported" those messages to some other app, where, say, they were all public and permanent, it would feel like a violation -- both by me the recipient and by Snap the platform.

Interoperability / portability would change that expectation, since the social platform would no longer have end-to-end control (more like email). User expectations would need to be reset, and new norms established. This would take work, and time.

Second, porting the "privacy context": Given platform expectations described above, users have a sense of what privacy context they are publishing into. A tweet, a message to a private group, a direct message, a snap message, all have different privacy contexts, managed by the platform. Could this context be "ported" too? I could imagine a "privacy manifest" that ships alongside any ported data, like this:

// privacy.json
{
  "content": "e9db5cf8349b1166e96a742e198a0dd1",    // hash of content
  "author": "c6947e2f6fbffadce924f7edfc1b112d",     // hash of author
  "viewers": ["07dadd323e2bec8ee7b9bce9c8f7d732"],  // hashes of recipients
  "TTL": "10"                                       // expiry time for content
}

In this model, we could have a flexible set of privacy rules that could even conceivably include specific users who could and could not see certain data, and for how long. This would likely require the development of some sort of federated or shared identity standards for recognizing users across platforms & networks. Note: this is a bit like how selective disclosure works with "viewing keys" in Zcash. TrustLayers also works like this.
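As a rough sketch, generating such a manifest is straightforward. This is hypothetical code, not any real standard: I use MD5 here only because the illustrative hashes above are 32 hex characters; a real system would use a stronger hash (e.g., SHA-256) and real shared identity standards rather than bare hashes of usernames.

```python
import hashlib
import json

def h(value: str) -> str:
    """Hash an identifier or content blob (MD5 only to match the example)."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

def privacy_manifest(content: str, author_id: str,
                     viewer_ids: list[str], ttl: str) -> str:
    """Build a privacy manifest to ship alongside ported data."""
    return json.dumps({
        "content": h(content),                  # hash of content
        "author": h(author_id),                 # hash of author
        "viewers": [h(v) for v in viewer_ids],  # hashes of recipients
        "TTL": ttl,                             # expiry time for content
    })

manifest = privacy_manifest("hello", "alice", ["bob"], "10")
```

The receiving platform would then be expected (and, per the liability idea below, required) to enforce the viewer list and TTL it receives.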

Third, liability transfer: Assuming the two above concepts, we would likely want a liability regime where the sending/porting company is released from liability and the receiving company/app assumes liability (all, of course, based on an initial authorization from a user). This seems particularly important, and is related to the idea of expectations and norms. If data is passed from Company A to Company B at the direction of User C, Company A is only going to feel comfortable with the transfer if they know they won't be held liable for the actions of Company B. And this is only possible if Company B is held accountable for respecting the privacy context as expressed through the privacy manifest. This is somewhat similar to the concept of "data controller" and "data processor" in GDPR, but recognizing that a "handoff" at the direction of the user breaks the liability linkage.

Those are some thoughts! Difficult stuff, but I think it will be solvable ultimately. If you want more, check out Cory Doctorow's in-depth look at this topic.


Regulation and the Tech Industry

Azeem Azhar has a great post up about the brewing conversation about regulation and the tech industry.

There are two main points that stand out to me:

1) In digital systems, ML/AI and data network effects create feedback loops that enable the biggest companies to keep getting better, faster:

and, 2) Regulation favors large incumbents over smaller challengers:

"Regulation is complicated. Dealing with it means dealing with lawyers, hiring compliance people, changing your product roadmap, building new code. Regulation raises barriers to entry. The most regulated industries, finance and health, have seen the deep consolidation and weak flow of new entrants for decades. Regulation favours the large."

This has created a conundrum. The instinct is to apply thorough and tough regulations to solve for #1. But the chances are, doing so will only reinforce the lead that the big companies have, as per #2.

A good example is the GDPR privacy regime in Europe. As reported in the WSJ (paywall), the advent of GDPR has increased the market power of the big ad players (Google and FB), because they have the best ability to capture user consents and to implement complex compliance procedures:

"GDPR has tended to hand power to the big platforms because they have the ability to collect and process the data," says Mark Read, CEO of advertising giant WPP PLC. It has "entrenched the interests of the incumbent, and made it harder for smaller ad-tech companies, who ironically tend to be European."

The solution, we have long argued at USV, is simply to increase data portability and interoperability. In other words, don't add burdensome regulation that startups can't comply with. And don't break up the tech companies, break up the data. And the simplest way to break up the data is to give users a right to access it in a programmable way. This is what the proposed ACCESS Act would do. I talked about this previously in the Adversarial Interoperability post, where I also showed this diagram:

What this shows is that throughout the history of computing, what has broken the monopoly power of each era's dominant firm is the emergence of an "open" technology on top: open source systems like Linux and open standards like HTTP.

Today, the set of open standards that need to be cultivated are cryptonetworks, cryptocurrencies and blockchains. These are the standards that make it possible to re-architect the data economy, including giving more control to individuals and removing it from companies. By design, crypto protocols replace certain things that companies do with things that any group of computers can do, like this:

So, the ultimate point we have been making is that if you're worried about the problems with the tech economy, one of the solution paths is through crypto.

That brings us back to regulation, and the current state of play around the regulation of cryptoassets globally. The situation we are in right now is such that within the US, there is a lot of regulatory uncertainty, and as a result, a slowing of the crypto economy. Whereas outside of the US (particularly in Asia), the crypto economy is booming -- not just tokens, but exchanges, wallets, and other infrastructure.

Because of all this, I worry that not only do we have the potential to miss one of the most important solution vectors to some of the issues facing the tech industry, but at the same time we (meaning the United States) may also be missing the opportunity to play a leading role in what has the potential to become one of the next major economic and technical platforms.


Adversarial Interoperability

As I make my way through the various predictions & reflections that accompany the new year, one stands out: the EFF's 2019 Year In Review, entitled "Dodging Bullets on the Path to a Decentralized Future". I have long been disappointed that there have seemed to be two separate and parallel conversations going on: the "traditional" digital rights / internet freedom community talking about "re-decentralizing the web" and the blockchain/crypto community working on the same thing. I like the EFF's recent work because they are connecting the two conversations, and their year in review is a good place to start on that.

A key link in the EFF review is to Cory Doctorow's work on Adversarial Interoperability, which studies the history of interoperability of technical systems and all of the commercial, legal and policy battles that have ensued because of it.

In this post in the Adversarial Interoperability series, Cory details the different kinds of interoperability and the dynamics around them. His mantra is "Fix the Internet, not the Tech Companies" and I couldn't agree more.

I believe, and we have said at USV many times, that driving interoperability is the best and most effective way to limit the power of big tech companies, and that in today's environment we should focus on "breaking up the data, not the companies."

When I talk to regulators, lawmakers and policymakers, I often use this diagram (credit to Placeholder for the underlying graphic):

Which shows that from a historical perspective, these "open" or "interoperability" technologies have been the driver in breaking up each era's dominant monopoly.

It's the same today, and Cory's and EFF's excellent work on the subject adds a lot of depth to the analysis.


Cryptonetworks and why tokens are fundamental

"Cryptonetworks" can help us build a more competitive, innovative, secure and decentralized Internet.  "Tokens" (also known as cryptocurrencies or cryptoassets) are integral to the operation of cryptonetworks.  As we design new laws and regulations in this emerging space, we should keep these concepts in mind, beyond the financial aspects that are today's primary focus.

In recent months, there has been untold attention paid to cryptocurrencies, blockchains and the coming of the "decentralized web" or "web3". And, given the rise of the cryptocurrency markets (over 1500 coins, with a total market cap of $370B as of today) and the recent boom in token-based fundraising (including a healthy dose of scams and shenanigans), there is increasing regulatory and legal attention being paid to the sector, and rightly so. This is a profound, and confusing, innovation. As John Oliver so aptly put it last week, it's "everything you don't understand about money combined with everything you don't understand about computers". Basically right.

At USV, we've spent the better part of the past five years exploring and investing in this space, and now have roughly a dozen investments touching it in one way or another. As we have watched the technology and market evolve, alongside the public discourse, we feel it's important to reiterate why we think this technology is so powerful and important, and to contribute to the ongoing collective learning about how it works. While much of the focus, especially in the context of regulations, is on the financial and fundraising applications of cryptocurrencies, our interest continues to be on the potential for cryptonetworks to provide digital services, such as computing, file storage, social applications, and more.

You might ask: why is it important to have another way to provide digital services? We already have lots of websites and apps that do that today. The reason cryptonetworks are an interesting addition to today's digital services is their core architecture of decentralization. Just as the original internet gave us a decentralized layer on top of the telecommunications network, which resulted in untold innovation, cryptonetworks are a decentralized way to provide digital services. Chris Dixon has a great post exploring why this is important, including the historical parallels to the original internet.

The decentralized architecture of cryptonetworks has the potential to address many issues in today's tech and business landscape, including information security, market competition, product innovation, and equitable distribution of gains from technology. Imagine, for instance, if the owners or users of Amazon/Google/Facebook/Reddit/etc. were able to "fork" the product and launch an identical competing copy if they didn't agree with the direction of the company. And imagine if all of the users of & contributors to a web platform also had a direct, monetary interest in the success of that platform, reflecting their own contributions as community members. This is how cryptonetworks work, since they are essentially open-source, mutually owned & operated web platforms. Each network's cryptocurrency or "token" acts as the internal currency, incentive mechanism, and "binding agent" for the other processes that help the platform function. And further, the internal data structure of cryptonetworks, the distributed ledger or blockchain, has unique properties that can improve privacy and data security. See also Steven Johnson's recent NYT piece exploring these ideas.

With that as context, it's important to walk through how cryptonetworks function, and importantly, how tokens function within them -- especially given the growing regulatory scrutiny around how tokens are created and traded. The deck below (full size / downloadable PDF) is meant to help explain this. While it does touch on some public policy goals at the end, it does not attempt to make specific, detailed recommendations. The main takeaways should be: (a) cryptonetworks are an important new innovation in how digital services are delivered, (b) tokens are fundamental to their operation, and (c) as we design new laws and regulations in the space, we should keep (a) and (b) in mind as guiding concepts.


Regulating source code

As more areas of our economy become computerized and move online, more and more of what regulators need to understand will be in the source code. For example, take the VW emissions scandal:

These days, cars are an order of magnitude more complex, making it easier for manufacturers to hide cheats among the 100 million lines of code that make up a modern, premium-class vehicle. In 2015, regulators realized that diesel Volkswagens and Audis were emitting several times the legal limit of nitrogen oxides (NOx) during real-world driving tests. But one problem regulators confronted was that they couldn’t point to specific code that allowed the cars to do this. They could prove the symptom (high emissions on the road), but they didn’t have concrete evidence of the cause (code that circumvented US and EU standards).

Part of the challenge here is not just the volume of code, but the way it's delivered: in the case of most consumer devices, code is compiled to binary, for competitive and copyright reasons. So, in the case of the VW scandal, researchers had to reverse-engineer the cheating by looking at outputs and by studying firmware images.

By contrast, with cryptocurrencies and blockchains, everything is open source, by definition. If you're curious about how the bitcoin, or ethereum, or tezos networks work, you can not only read the white papers, but also examine the source code. Because the value of cryptocurrency networks is embedded in the token, there is no longer a commercial incentive to obscure the source code -- indeed, doing so would be detrimental to the value of the network, as no one would trust a system they can't introspect. This may seem like a minor detail now, but I suspect it will become an important differentiator, and we'll begin to see widespread commercial and regulatory expectations for open source code over time.


Aligning purpose and strategy: Cloudflare goes nuclear on patent troll

Last week, I was in Amsterdam at the Next Web conference, giving a talk about "Purpose, Mission and Strategy" -- how companies can strengthen the connection between these to align efforts and make tough calls more easily (will post video when it comes online).  From that talk:

The idea here being that there are tough, tough calls to be made every day, whether that's what feature to prioritize, who to hire, what market to enter, what policies to enact, or whether to back down in the face of conflict or stand up and fight.

When I think about the connection between purpose, values and strategy, one of the companies that always stands out most brightly is Cloudflare. Anyone who operates a website or app probably knows Cloudflare, but regular folks may not -- they provide performance and security services for millions of websites, and currently handle over 10% of global internet traffic. Sitting in that privileged position, they must have a strong sense of their purpose and values, and a strong backbone when it comes to living up to those.

This comes up in all kinds of ways. For example, it was recently revealed that Cloudflare had been fighting an FBI national security letter, under gag order since 2013, and even after the NSL was rescinded and no data was handed over, they continued to fight for the right to be transparent about the process:

"Early in the litigation, the FBI rescinded the NSL in July 2013 and withdrew the request for information. So no customer information was ever disclosed by Cloudflare pursuant to this NSL. Even though the request for information was no longer at issue, the NSL’s gag order remained. For nearly four years, Cloudflare has pursued its legal rights to be transparent about this request despite the threat of criminal liability."

I call that dedication to purpose and values. At the USV CEO summit a few weeks ago, Cloudflare CEO Matthew Prince made the comment that one way to "tell the story" of your company, both internally and externally, is to talk about things that you do or did that others wouldn't. In this case, the story is that Cloudflare is willing to stand up and fight, even when it's well beyond their short-term corporate interests.

Today, this is playing out again in the context of patent trolls. Those outside the tech industry might not be aware of the detrimental impact of this activity on the tech ecosystem, and startups in particular. In a nutshell, these Non-Practicing Entities (NPEs), aka "trolls", will buy the rights to patents purely for the purpose of shaking down operating companies for settlements. The claims are almost always specious, and the strategy is to get startups to settle for just below the cost of litigating. Pay me to go away. It's a huge problem: at best an expensive distraction, and at worst a company-killing scenario.

That's why I am so proud to see that Cloudflare, in the face of an assertion from a patent troll, has decided not to settle, but instead is standing up to fight. And they are not just doing the bare minimum, they are going fucking nuclear. Rather than do what many or most companies would do, just to get the troll to go away, they are standing up not just for themselves, but for the whole ecosystem.

For more on the story, first read this, and then this. Cloudflare is not only going to litigate this case the full distance, but are also:

  • crowdfunding research to invalidate **all** of Blackbird's patents

  • investigating Blackbird's business operations to expose some of the opaque and untoward inner workings

  • filing ethics complaints in IL and MA regarding the unusual and likely unethical structure of Blackbird (more detail in the posts)

To tie this back to purpose and mission, here is Matthew's take on why they are digging in here:

"Cloudflare’s mission has always been to help build a better Internet. So it won’t be surprising to frequent readers of this blog that Cloudflare isn’t interested in a short term and narrow resolution of our own interests. We’re not going to reach a settlement that would pay tens of thousands of dollars to Blackbird to avoid millions in legal fees. That would only allow patent trolls to keep playing their game and preying upon other innovative companies that share our interest in making the Internet work better, especially newer and more vulnerable companies."

Kudos to Cloudflare for standing up here and doing more than they need to.  If more companies follow their lead, we stand a chance to make a dent in this issue.


Experience ↔ Design ↔ Policy

People often ask me how I ended up working in venture capital, and more specifically in a role that deals with policy issues ("policy" broadly speaking, including public policy, legal, "trust & safety", content & community policy, etc.). ¬†Coming from a background as a hacker / entrepreneur with an urban planning degree, how I ended up here can be a little bit puzzling. The way I like to describe it is this: From the beginning, I've been fascinated with the "experience" of things -- the way things feel. ¬†Things meaning products, places, experiences etc. ¬†I've always been super attuned to the details that make something "feel great", and I'd say the overriding theme through everything I've done is the pursuit of the root cause of "great experiences". From there, I naturally have been drawn to design: the physical construction of things. ¬†I love to make and hack, and I geek out over the minor design details of lots of things, whether that's the seam placement on a car's body panels, or the design of a crosswalk, or the entrance to a building, or the buttery UI of an app. ¬†Design is the place where people meet experience. But over time, I came to realize something else: what we design and how we design it is not an island unto itself. ¬†It's shaped -- and enabled, and often constrained -- by the rules and policies that underly the design fabric. ¬†That's true for cars, parks, buildings, cities, websites, apps, social networks, and the internet. ¬†The underlying policy is the infrastructure upon which everything is built. This first really hit me, right after college (16 years ago now), when I was reading Cities Back from the Edge: New Life for Downtown, a book chronicling the revitalization of many smaller downtowns across America, written by my old friend Norman Mintz. ¬†Before I started the book, my main thinking was: "I want to be an architect, because architects design places". ¬†Norman had told me "you don't want to be an architect." 
¬†But I didn't believe him. ¬†But I distinctly remember, about halfway through the book, having an a-ha moment, where I scrawled in the margin: "I don't want to be an architect! ¬†I want to do this!". ¬†Where¬†this was engaging in the planning and community engagement process that ultimately shaped the design. ¬†It hit me that this is where the really transformative decisions happened. I spent the next three years at Project for Public Spaces, working on the design of public spaces across the US (including Times Square and Washington Square Park in NYC), with an emphasis on the community process that shaped the policies, that would shape the design, that would determine the experience. ¬†The goal was all about experience, but the guiding philosophy at PPS was that you got to great¬†experience by engaging at the people/community/policy level, and letting the design grow from there. Being a hacker and builder, I've always been drawn to computers and the internet. ¬†During my 6 years leading the "labs" group at¬†OpenPlans, a now-shuttered¬†incubator for software and media businesses at the intersection of cities, data, and policy, I made a similar journey -- from experience, to design, to policy -- but this time focused on tech & data policy and the underpinnings of that other world we inhabit: the Internet. ¬†I started out building product -- head in the code, focused on the details -- and emerged focusing on issues like open data policy, open standards, and how we achieve an open, accessible, permissionless environment for innovation. ¬†The most satisfying achievement at OpenPlans was working with NYC's MTA (which operates the buses and subways) to overhaul their data access policies, and then helping to build the cities first real-time transit API. So the common thread is: great places (physical AND virtual) are a joy and a pleasure to inhabit. 
Creating them and cultivating them is an art more than a science, and is a result of the Experience ↔ Design ↔ Policy dynamic.

To apply this idea a little further to the web/tech world: I think of the "policy" layer as including public policy issues (like copyright law or telecom policy) which affect the entire ecosystem, but also -- and often, more importantly -- internal policy issues, like a company's mission/values, community policies, data/privacy policies, API policies, relationship to adjacent open source communities, etc. These are the foundation upon which a company (or community, in the case of a cryptocurrency) is built, and the more thoughtfully and purposefully designed these are, the easier time the company/community will have in making hard decisions down the road.

So if you think of companies like Kickstarter, or Etsy, or DuckDuckGo (all USV portfolio companies), they've invested considerable effort into their policy foundations. But it's not just "feel good" or "fuzzy bunny", mission-driven companies that this applies to. USV portfolio company Cloudflare announced yesterday that they've been fighting a National Security Letter from the FBI, under gag order, since 2013, in order to protect their users' data, reinforcing their longstanding commitment to their users. This **very hard** decision was born directly of the hard work they did at the founding of the company to ground their activities (and the subsequent design of their product, and the experience they provide to their users) in foundational policy decisions.

Or look at all the trouble that Twitter has been having recently combating the abuse problem. Or Facebook with the fake news problem. Policy in the spotlight, with a huge impact on product, design and experience.

Or look at the internal turmoil within the Bitcoin and Ethereum communities over the past 12 months as they've dealt with very difficult technical / political decisions.
Lucky for us, there is so much innovation in this space, and every new cryptocurrency that launches is learning from these examples -- take Tezos, an emerging cryptocurrency that explicitly ships with mechanisms to handle future governance issues (democracy, coded).

So I guess the purpose of this post is to draw that through line, from Experience, to Design, to Policy, and show how it actually shapes nearly everything we encounter every day. What a profound and exciting challenge.


Personal Democracy Forum NYC: Regulating with Data

At this year's Personal Democracy Forum, the theme was "the tech we need". One of the areas I've been focused on here is the need for "regulatory tech" -- in other words, tools & services to help broker the individual / government & corporation / regulator relationship.

In a nutshell: we are entering the information age, and as such our fundamental models for accomplishing our goals are changing. In the case of regulation, that means a shift from the industrial, permission-based model to the internet-native, accountability-based model. This is an issue I've written about many times before.

In order for this transition to happen, we need some new foundational technologies: specifically, tools and services that broker the data sharing relationship between government and the private sector. These can be vertical services (such as Airmap for drones), or horizontal tools (such as Enigma).

You can see the video of the talk (10min) here:

And the slides are here:

The timing is apropos because here in New York State, the Senate & Assembly just passed a bill banning advertising for short-term apartment rentals. This is a very coarse approach, one that declines to regulate using an accountability-based model rather than a permission-based model. Now of course, this particular issue has been fraught for a long time, including claims that Airbnb manipulated the data it shared with NYS regulators. But that situation is in fact a perfect example of the need for better tools & techniques for brokering a data-based regulatory relationship.


The Freedom to Innovate and the Freedom to Investigate

Earlier this week, I was at SXSW for CTA's annual Innovation Policy Day. My session, on Labor and the Gig/Sharing Economy, was a lively discussion including Sarah Leberstein from the National Employment Law Project, Michael Hayes from CTA's policy group (which reps companies from their membership including Uber and Handy), and Arun Sundararajan from NYU, who recently wrote a book on the Sharing Economy.

But that's not the point of this post! The point of this post is to discuss an idea that came up in a subsequent session, on Security & Privacy and the Internet of Things. The idea that struck me the most from that session was the tension -- or, depending on how you look at it, codependence -- between the "freedom to innovate" and the "freedom to investigate".

Defending the Freedom to Innovate was the Mercatus Center's Adam Thierer. Adam is one of the most thoughtful folks looking at innovation from a libertarian perspective, and is the author of a book on the subject of permissionless innovation. The gist of permissionless innovation is that we -- as a society and as a market -- need the ability to experiment. To try new things freely, make mistakes, take risks, and -- most importantly -- learn from the entire process. Therefore, as a general rule, policy should bias towards allowing experimentation, rather than prescribing fixed rules. This is the foundation of what I call Regulation 2.0[1].

Repping the Freedom to Investigate was Keith Winstein from the Stanford CS department (who jokingly entered his twitter handle as @Stanford, which was reprinted in the conference materials and picked up in the IPD tweetstream). Keith has been exploring the idea of the "Freedom to Investigate", or, as he put it in this recent piece in Politico, "the right to eavesdrop on your things". In other words, if we are to trust the various devices and services we use, we must have a right to inspect them -- to "audit" what they are saying about us.
In this case, specifically, that means a right to intercept and decrypt the messages sent between mobile / IoT devices and the web services behind them. Without this transparency, we as consumers and as a society have no way of holding service providers accountable, or of having a truly open market.

The question I asked was: are these two ideas in tension, or are they complementary? Adam gave a good answer, which was essentially: they are complementary -- we want to innovate, and we also need this kind of transparency to make the market work. But there are limits to the forms of transparency we can force on private companies -- lots of information we may want to audit is sensitive for various reasons, including competitive issues, trade secrets, etc. And Keith seemed to agree with that general sentiment.

On the internet (within platforms like eBay, Airbnb and Uber), this kind of trade is the bedrock of making the platforms work (and the basis of what I wrote about in Regulation, the Internet Way). Users are given the freedom to innovate (to sell, write, post, etc), and platforms hold them accountable by retaining the freedom to investigate. Everyone gladly makes this trade, understanding at the heart of things that without the freedom to investigate, we cannot achieve the level of trust necessary to grant the freedom to innovate!

So that leaves the question: how can we achieve the benefits of both of these things that we need -- the freedom to experiment, and the freedom to investigate (and as a result, hold actors accountable and make market decisions)? Realistically speaking, we can't have the freedom to innovate without some form of the freedom to investigate. The tricky bit comes when we try to implement that in practice. How do we design such systems? What is the lightest-weight, least heavy-handed approach? Where can this be experimented with using technology and the market, rather than through a legal or policy lever? These are the questions.
[1] Close readers / critics will observe an apparent tension between a "Regulation 2.0" approach and policies such as Net Neutrality, which I also favor. Happy to address this in more depth, but long story short: Net Neutrality, like many other questions of regulations and rights, is a question of whose freedom we are talking about -- in this case, the freedom of telcos to operate their networks as they please, or the freedom of app developers, content providers and users to deliver and choose from the widest variety of services and programming. The net neutrality debate is about which of those freedoms to prioritize, and I side with app developers, content providers and users, and the broad & awesome innovation that such a choice results in.