Earlier this week, I spoke at a Justice Department / Stanford conference about antitrust issues in the tech sector. Our panel included Patricia Nakache from Trinity Ventures, Ben Thompson from Stratechery and Mark Lemley from Stanford. If you are interested you can watch the whole thing here:
https://twitter.com/trinityventures/status/1228085073435484161
The main point I tried to make was that cultivating the development of blockchains and cryptonetworks is actually a critical strategy here. Regular readers will know that I don't shut up about this, and I held to that on the panel. This point is painfully absent in most conversations about market power, competition and antitrust in the tech sector, and I will always try to insert it into the conversation.
To me, blockchains & crypto are the best "offense" when it comes to competition in the tech sector. Historically, breakthroughs in tech competition have included an offense component in addition to a defense component (note that the below only focuses on computing, not on telecom):

[Chart: offense and defense in the history of tech competition. Credit: Placeholder / USV]
The "defense" side has typically included a break up (US vs. AT&T) or some kind of forced openness. Examples of forced openness include the Hush-a-phone and Carterfone decisions which forced openness upon AT&T. Several decades later were the (ongoing) battles over Net Neutrality with the ISPs. The discussion about data portability and interoperability brings the same questions to the applications / data layer.
Data portability & interoperability are important for two reasons: 1/ because they focus on a major source of market power in the tech sector, which is control of data ("break up the data, not the companies"), and 2/ because they represent a category of regulatory interventions that are just as easy for small companies to implement as large ones, unlike heavy approaches like GDPR that are easy for big companies to implement but hard on startups.
That said, when you dig into the issue of data portability, there are some hard problems to solve. I don't believe they are insurmountable, but I also don't believe they have been solved yet.
For context, data portability is the idea that a user of a tech service (e.g., Google, Facebook, Twitter, etc) should be able to easily take their data with them and move it to a competing service, if they so choose. This is similar to how you can port your phone number from one carrier to another, or how in the UK you can port your banking data from one institution to another. Both of these examples required legislative intervention, with an eye towards increasing competition. Also, most privacy regimes (e.g., GDPR in Europe and CCPA in California) have some language around data portability.
Where it gets more complicated is when you start considering what data should be portable, and whose data.
For example, within tech companies there are generally three kinds of data: 1/ user-submitted data (e.g., photos, messages that you post), 2/ observed data (e.g., search history or location history), and 3/ inferred data (inferences that the platform makes about you based on #1 and #2 -- e.g., Nick likes ice skating). Generally speaking, I believe that most type #1 and type #2 data should be portable, but most type #3 probably should not.
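Purely as an illustration (the category labels and record format here are hypothetical, not any platform's actual export schema), a data export that follows this rule might filter records like so:
# export_filter.py -- hypothetical sketch of filtering an export by data category
PORTABLE = {"user_submitted", "observed"}   # types 1 and 2: port these
NON_PORTABLE = {"inferred"}                 # type 3: platform inferences stay behind

def filter_export(records):
    """Return only the records a user should be able to take with them."""
    return [r for r in records if r["category"] in PORTABLE]

records = [
    {"category": "user_submitted", "data": "photo_123.jpg"},
    {"category": "observed", "data": "searched for 'ice skates'"},
    {"category": "inferred", "data": "Nick likes ice skating"},
]
print(filter_export(records))  # the inferred record is dropped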
Adding to the complication is the question of when "your" data also includes data from other people -- for example, messages someone else sent me, photos where I was tagged, contact lists, etc. This was at the heart of the Cambridge Analytica scandal, where individual users exporting their own data to a third-party app unwittingly exposed the data of many more people.
I'd like to focus here on the second category of complications -- how to deal with data from other people, and privacy more generally, when thinking about portability. This is a real issue that deserves a real solution.
I don't have a full answer, but I do have a few ideas:
First, expectations matter. When you send me an email, you are trusting me (the recipient) to protect that email, and not publish it, or upload it to another app that does sketchy things with it. You don't really care (or even know) whether I read my email in Gmail or in Apple Mail, and you don't generally think about those companies' impact on your privacy expectations. By contrast, when you publish into a social web platform, you are trusting both the end recipient of your content and the platform itself. As an example, if you send me messages on Snapchat, you expect that they will be private to me and will disappear after a certain amount of time. So if I "ported" those messages to some other app, where, say, they were all public and permanent, it would feel like a violation - both by me the recipient and by Snap the platform. Interoperability / portability would change that expectation, since the social platform would no longer have end-to-end control (more like email). User expectations would need to be reset, and new norms established. This would take work, and time.
Second, porting the "privacy context": Given the platform expectations described above, users have a sense of what privacy context they are publishing into. A tweet, a message to a private group, a direct message, and a snap message all have different privacy contexts, managed by the platform. Could this context be "ported" too? I could imagine a "privacy manifest" that ships alongside any ported data, like this:
# privacy.json
{
  "content": "e9db5cf8349b1166e96a742e198a0dd1", // hash of content
  "author": "c6947e2f6fbffadce924f7edfc1b112d", // hash of author
  "viewers": ["07dadd323e2bec8ee7b9bce9c8f7d732"], // hashes of recipients
  "TTL": "10" // expiry time for content
}
In this model, we could have a flexible set of privacy rules that could even conceivably specify which users could and could not see certain data, and for how long. This would likely require the development of some sort of federated or shared identity standard for recognizing users across platforms & networks. Note: this is a bit like how selective disclosure works with "viewing keys" in Zcash. TrustLayers also works like this.
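To make this concrete, here is a minimal sketch in Python of how a sending platform might build such a manifest and how a receiving app might check it. The function names are hypothetical, and it assumes the shared-identity problem above is solved (both platforms hash the same user identifier the same way):
# privacy_manifest.py -- illustrative sketch only, not a real platform API
import hashlib
import json

def h(value: str) -> str:
    # MD5 here only so the output matches the 32-character hashes in the example above;
    # a real system would want a stronger hash.
    return hashlib.md5(value.encode("utf-8")).hexdigest()

def build_manifest(content: str, author: str, viewers: list, ttl: str) -> dict:
    """Sending platform: package the privacy context that travels with the data."""
    return {
        "content": h(content),
        "author": h(author),
        "viewers": [h(v) for v in viewers],
        "TTL": ttl,
    }

def may_view(manifest: dict, user: str) -> bool:
    """Receiving platform: honor the ported privacy context."""
    return h(user) in manifest["viewers"]

m = build_manifest("hi nick", "alice@example.com", ["nick@example.com"], ttl="600")
print(json.dumps(m, indent=2))
print(may_view(m, "nick@example.com"))      # True
print(may_view(m, "stranger@example.com"))  # False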
Third, liability transfer: Assuming the two concepts above, we would likely want a liability regime where the sending/porting company is released from liability and the receiving company/app assumes it (all, of course, based on an initial authorization from the user). This seems particularly important, and is related to the idea of expectations and norms. If data is passed from Company A to Company B at the direction of User C, Company A is only going to feel comfortable with the transfer if they know they won't be held liable for the actions of Company B. And this is only possible if Company B is held accountable for respecting the privacy context as expressed through the privacy manifest. This is somewhat similar to the concept of "data controller" and "data processor" in GDPR, but recognizing that a "handoff" at the direction of the user breaks the liability linkage.
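As a sketch of what that handoff could record (everything here is hypothetical, not an existing standard), the port might be accompanied by a receipt that pins the exact privacy manifest, captures User C's authorization, and captures Company B's acceptance of responsibility for it:
# handoff_receipt.py -- hypothetical sketch of a liability-transferring handoff record
import hashlib
import json
from datetime import datetime, timezone

def digest(manifest: dict) -> str:
    """Stable hash of the manifest, so the receipt references the exact privacy context."""
    return hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).hexdigest()

def build_receipt(manifest: dict, user: str, sender: str, receiver: str) -> dict:
    return {
        "manifest_digest": digest(manifest),
        "authorized_by": user,             # User C initiates and authorizes the port
        "released": sender,                # Company A: released from liability
        "assumes_liability": receiver,     # Company B: accountable for the privacy context
        "accepted_privacy_context": True,  # Company B's acknowledgment of the manifest terms
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

manifest = {"content": "e9db5cf8349b1166e96a742e198a0dd1", "TTL": "10"}
print(json.dumps(build_receipt(manifest, "user-c", "company-a", "company-b"), indent=2))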
Those are some thoughts! Difficult stuff, but I think it will ultimately be solvable. If you want more, check out Cory Doctorow's in-depth look at this topic.
Last week, the Blockstack team formally rolled out their proposal for a new mining mechanism for the Stacks blockchain called Proof of Transfer (PoX). In addition to the blog post, you can read the full PoX white paper and the Stacks Improvement Proposal (SIP-007) that details the idea.
PoX is a way of building new blockchains on top of existing Proof-of-Work blockchains like Bitcoin. The Stacks blockchain has always been built on top of Bitcoin, but has thus far used a proof-of-burn (PoB) mining mechanism which, while benefitting from Bitcoin's security, requires burning BTC. PoX, by contrast, requires a transfer of BTC rather than a burn, which has the added benefit of creating a mining incentive pool denominated in Bitcoin.
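At the risk of oversimplifying, here is a toy contrast of the two mechanisms as described above (not the actual Stacks implementation; see the white paper and SIP-007 for the real design):
# pox_vs_pob.py -- toy illustration of burn vs. transfer
def proof_of_burn_commit(btc: float) -> dict:
    """PoB: the committed BTC is sent to an unspendable address, i.e. destroyed."""
    return {"btc_committed": btc, "btc_destroyed": btc, "btc_to_incentive_pool": 0.0}

def proof_of_transfer_commit(btc: float) -> dict:
    """PoX: the same commitment is transferred rather than burned, funding an
    incentive pool denominated in Bitcoin."""
    return {"btc_committed": btc, "btc_destroyed": 0.0, "btc_to_incentive_pool": btc}

print(proof_of_burn_commit(0.01))
print(proof_of_transfer_commit(0.01))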
At a higher level, one of the coolest aspects of cryptonetwork and blockchain technology is composability -- the idea that crypto assets and protocols can be freely interconnected in almost any way imaginable, without barriers or permission. Every (public) blockchain, asset, and smart contract is a de facto API that can be hooked into, built upon, and extended.
This may seem like a minor feature, but I believe it is a breakthrough characteristic. Today, we are seeing this play out most vividly in the DeFi space, where protocols like Maker, Compound and Uniswap interconnect to build new financial products. What Blockstack is doing with PoX brings this approach further into the Web3 / data space. Ultimately, I believe that this approach will enable a broad explosion of not only tech infrastructure but new experiences & features, both for consumers and businesses. Zombies eating Kitties is just the tip of the iceberg.
It feels like consumer development in Web3 is moving slowly, and by the user numbers it is. But composable innovation is compounding, and the work that's going on right now is creating the tools & patterns for what will certainly be huge, exponential leaps in functionality and experience over time.
Last year around this time, I had a major medical scare which shook me pretty hard. The details don't matter, but the takeaway was that afterwards I felt lucky to have not had a more serious problem, despite a bad situation that was totally avoidable. I dodged a bullet. It was a wake-up call.
Last week, I was in the Netherlands, and as always, was enraptured by the water. The water is, of course, a major threat to the Netherlands and has been for centuries, and as a result the Dutch have become known for their water engineering prowess and forethought. Thomas sent me this article on 21st-century Dutch water management in the face of climate change. This line stood out:
"During Gustav, the level was all the way up to here," Van Ledden says, placing his hand just below the top of the wall. "And Gustav was just a friendly wake-up call. In 50 years, if the sea level goes up 1 or 1½ feet, the level for that storm would be here," he says, holding his hand well above the top of the flood wall. To make sure that doesn't happen, the Corps is planning to build a giant storm-surge barrier between Lake Borgne and the Gulf Intracoastal Waterway.
A "friendly wake-up call" is something that's scary enough to set you straight, but not bad enough to do real damage. It is and incredibly useful thing. Hopefully it should never come to that, but I find that it's human nature to push things to their natural limits until some sort of wake-up call inspires a correction.