Dick Pics and Cable Company Fuckery

Apr 7, 2015

John Oliver has become the most important voice in tech policy (and maybe policy in general).

His gift, his talent, his skill: turning wonky policy language that makes people’s eyes glaze over into messages that people connect to and care about.

Last fall, he took what may be the most boring, confusing term ever, Net Neutrality, and made it relatable as Cable Company Fuckery. Eight million people watched that video, and it was a big factor behind the more than four million comments left at the FCC on an issue that even most tech people had a hard time explaining to each other.

Now, he has tackled another mind-bending but really very important topic: surveillance. It’s amazing, really. Huge, complicated, important issue. Real-life spy stories, with real-life heroes and villains. And no one gives a shit at all. But when you say it the right way — in this case: should the government be able to see your dick pic? — people light up.

This is 30 minutes of truly instructive brilliance:

The best part: he hands Snowden a folder labeled “top secret” containing an 8×10 photo of his own penis, and asks Snowden to re-explain every NSA spy program in terms of “the dick pic test”.

On the one hand, you could argue that it’s sad that policy issues need to get boiled down to “dick pics” and “fuckery” for people to get them.

On the other hand, it’s even sadder that the people investing time, energy, and effort in working on these issues (myself included) don’t grasp that and use it to make sure ideas connect. Thankfully we have John Oliver to help us with that.

This piece is brilliant — in particular the way he opens Snowden’s eyes to the extent to which people don’t get this issue, misunderstand who he is and what he did, and need it to be presented to them in a different, simpler way.

The major point here is that no matter your feelings on what Snowden did, it’s all for naught if it doesn’t trigger an actual conversation. And while it’s easy for folks in the tech / policy community to feel like that conversation is happening, the truth is that on a broad popular level it’s not.

So once again John Oliver has shown us how to take a super important, super complicated, and basically ignored issue and put it on the table in a way people can chew on. Bravo.

From here on out, I’m going to start looking at every policy issue through the lens of WWJD — what would John Oliver do — and lift it out of the vegetable garden of policy talk and into the headspace of people on the street.

Failure is the tuition you pay for success

Apr 1, 2015

I couldn’t sleep last night, and was up around 4am lurking on Twitter. I came across an old friend, Elizabeth Green, who is an accomplished and awesome education writer — you’ve probably read some of her recent NYT mag cover stories — and it turns out she has a new book out, Building a Better Teacher. I know Elizabeth because back in 2008 at OpenPlans, we worked with her to launch GothamSchools, which eventually spun out and became Chalkbeat.

I said to myself: oh yeah, that was such a great project; I had totally forgotten about that. So awesome that it is still up and running and thriving. And I dutifully headed over to update my LinkedIn profile and add it to the section about my time at OpenPlans.

During my nearly 6 years at OpenPlans, we built a lot of great things and accomplished a lot, and I’m really proud of my time there. But it’s also true that we made a ton of mistakes and invested time, money and energy in many projects that ranged from mild disappointment to total clusterfuck.

Looking at my LinkedIn profile, I started to feel bad that I was only listing the projects that worked – the ones that I’m proud of. And that’s kind of lame. The ones that didn’t work were equally important — perhaps more so, for all the hard lessons I learned through doing them and failing. So rather than be ashamed of them (the natural and powerful response), I should try and celebrate them.

So I decided to add a new section to my LinkedIn profile — right under my work history: the Anti-Portfolio, projects that didn’t work. I started with things we did at OpenPlans, but have since added to it beyond that. Here’s the list so far:

  • OpenCore (2005-8) – a platform for organizing/activism. Hugely complex, too much engineering, not enough product/customer focus, trying to be a web service and an open source project at the same time and basically failing at both. (now http://coactivate.org)
  • Homefry (2008) – platform for short-term apartment sharing. Seemed like such a great idea. A few friends and I built a half-functional prototype, but didn’t see it through. Maybe a billion dollar mistake. (more here).
  • Community Almanac (2009) – platform for sharing stories about local places. Really beautiful, but no one used it (http://communityalmanac.org)
  • OpenBlock (2010) – open source fork of everyblock.com, intended for use by traditional news organizations. Stack was too complicated, and in retrospect it would have been smarter to simply build new, similar tools, rather than directly keep alive that codebase (https://github.com/openplans/openblock)
  • Civic Commons Marketplace (2011) – a directory/marketplace of open source apps in use by government. Way overbuilt and never got traction. Burned the whole budget on data model architecture and engineering.
  • Distributed (2014) – crowd funding for tech policy projects. Worked OK, but we discontinued it after a brief private pilot.

Looking through this list — and there are certainly ones I’ve forgotten, and I will keep adding; trust me — what I noticed was that in pretty much every one of these cases, the root cause was Big Design Up Front: too much engineering and building, and not enough customer development. Too much build, not enough hustle. Another observation: these were almost all slow, drawn-out, painful failures, not “fast” failures.

I thought I learned these lessons way back in 2006! That was when I first read Getting Real, which became my bible (pre-The Lean Startup) for running product teams and building an organization. The ideas in Getting Real were the ones that helped make Streetsblog and Streetfilms such a big success. And they are what helped me understand what was going wrong with the OpenCore project, and ultimately led me to disassemble it and start what became OpenPlans Labs.

But it turns out the hard lessons can lurk, no matter how much you think you’ve taken them to heart. Perhaps tracking the Anti-Portfolio in public will help.

Financial Planning for the 90%

Mar 30, 2015

A few weeks ago as I was walking down Beacon Street in Brookline, I happened upon something amazing: The Society of Grownups.

The Society of Grownups is a self-proclaimed “grad school for adulthood”; the idea is to give people the tools they need to manage their grown-up lives. The primary focus is on financial literacy and counseling, but it also includes other kinds of classes and programs.

This is something I’ve wanted for a long time. I am dumbfounded that we don’t have more financial / grownup education early in our lives. I graduated high school without so much as a word about earning and saving money, what credit cards mean, etc. I suppose, like sex ed, financial ed is one of those subjects that people are just supposed to figure out on their own, or maybe learn from their parents. It’s just that it’s so important — if you think about it, it is preposterous that this is not more of a focus at all levels of learning.

Of course, there is no shortage of financial services for people who are well off — and I’d argue that the prevailing mindset is that you need to have money to talk to someone about money. Which makes sense, in a way, but is also fundamentally wrong, and a contributing factor to why it’s expensive to be poor.

Point is, I’ve been hoping to see services like this crop up. Not only is it an important social issue, but I suspect it can be a really good business in its own right.

The Society of Grownups is one attempt — at the moment, it’s not trying to be a web-scale effort, but rather is small and personal: in-person coaching, classes, and community, ranging from $20 for a 20-minute session with a financial coach, to $100 for a 90-minute session, to a range of pricing for classes and events.

I signed up for a 20 minute financial coaching session (first one is free), just to get a feel for it. My coach came in with a big “Don’t Panic” sticker on her notebook — this is one of their slogans. We talked through our situation, concerns and goals. It was really helpful and refreshing. I wish I had done this 15 years ago when I was in college (and every month since).

Another player in this space that I’ve been curious about is LearnVest — they are going with the web-based approach; the yin to TSOG’s yang. I got a little stuck in the LearnVest onboarding — there’s nothing wrong with it, but it’s just the standard email back-and-forth plus phone calls. There is something nice about just being able to walk into a place and talk face-to-face. But I suspect that I’ll like LearnVest as well. They do direct integration with your bank accounts (a la Mint), and use the coaching to help you come up with a strategy and a plan.

Anyway, this is all very encouraging, and I hope both of these efforts and others can get traction. So much of the country, and the world, is so fucked and adrift in terms of money. And while there are clearly macro forces at play causing much of that, there’s also the potential for everyone to get smarter and better about how they manage on a month-to-month basis, and I hope we see more and more companies finding a business model that serves them.

The Light Inside, The Fire Inside

Mar 18, 2015

Last week, a friend passed away after a relatively brief but intense battle with lung cancer. I didn’t know Paul well, but he was very close with a few of my very close friends, and I had spent enough time with him to understand that he was special: he had a light inside of him. A curiosity and energy that opened his eyes, lit up his mind, and made him go. I could feel it the first time I met him.

Later last week, I was traveling for work, first to the Bay Area and then to Austin. In both cases, I spent time with old, great friends who I don’t get to see very often. And in both cases, I stayed out way later than I should or usually do, and drank way more than I should or usually do. But as with most things like that, there is good along with the bad.

Not that surprisingly, I guess, I ended up having almost the same conversation in both places. And that conversation was about how wonderful it is to find people in your life who have that light inside of them. There are so many nasty people in the world, and being a kid (or a grownup, for that matter) can be hard, and compoundingly hard the fewer sources of light you have in your life. You don’t always find people with that light — right away, or sometimes at all. So when you do, it is so fantastic.

Thinking about Paul, and thinking about my friends, it made me so thankful for the people in my life who are generous, curious, thoughtful, creative, and full of light. It has taken a long time to find them. And it’s also so easy to take them for granted if you’re not careful.

It makes me think about how we find people who have this (specifically, on the internet), and also how we find and cultivate our own center of light, often in the face of doubt and difficulty.

Coincidentally, on the flight home, I started reading James Altucher’s book, Choose Yourself. One idea he stresses is that the key to happiness (and success) is lighting and stoking the fire inside you, and letting everything else flow from that. And that this can be hard to do, since there are plenty of ways for us to doubt ourselves, get distracted, and lose faith that our fire is real and valuable.

He calls it a fire, not a light, but it’s the same idea. It’s getting over whatever bullshit may be in your way (approval of others, fitting in, etc), and focusing on the energy, the fire, at your core. Being honest and accepting about what that is, and letting that guide your way to more and more. It’s a simple idea, but has really stuck with me.

So, I’ll just end by thanking everyone who puts light out into the world, including many of those in my current orbit who I appreciate enormously. And by encouraging myself and everyone else to keep stoking the fire, even when it’s hard to do.

Increasing trust, safety and security using a Regulation 2.0 approach

Dec 18, 2014

This is the latest post in a series on Regulation 2.0 that I’m developing into a white paper for the Program on Municipal Innovation at the Harvard Kennedy School of Government.

Yesterday, the Boston Globe reported that an Uber driver kidnapped and raped a passenger. First, my heart goes out to the passenger, her friends and her family. And second, I take this as yet another test of our fledgling ability to create scalable systems for trust, safety and security built on the web.

This example shows us that these systems are far from perfect. This is precisely the kind of worst-case scenario that anyone thinking about these trust, safety and security issues wants to prevent. As I’ve written about previously, trust, safety and security are pillars of successful and healthy web platforms:

  • Safety is putting measures into place that prevent user abuse, hold members accountable, and provide assistance when a crisis occurs.
  • Trust, a bit more nuanced in how it’s created, is creating the explicit and implicit contracts between the company, customers and employees.
  • Security protects the company, customers, and employees from breach, digital or physical, all while abiding by local, national and international law.

An event like this has compromised all three. The question, then, is how to improve these systems, and then whether, over time, the level of trust, safety and security we can ultimately achieve is better than what we could do before.

The idea I’ve been presenting here is that social web platforms, dating back to eBay in the late 90s, have been in a continual process of inventing “regulatory” systems that make it possible and safe(r) to transact with strangers.

The working hypothesis is that these systems are not only scalable in a way that traditional regulatory systems aren’t — building on the “trust, then verify” model — but can actually be more effective than traditional “permission-based” licensing and permitting regimes. In other words, they trade access to the market (relatively lenient) for hyper-accountability (extremely strict). Compare that to traditional systems, which don’t have access to vast and granular data and can only rely on strict up-front vetting followed by limited, infrequent oversight. You might describe it like this:

  • Regulation 1.0: restricted access to the market (strict up-front vetting, licensing and permitting), followed by limited, infrequent oversight.
  • Regulation 2.0: open access to the market, in exchange for high, ongoing, data-driven accountability.

This model has worked well in situations with relatively low risk of personal harm. If I buy something on eBay and the seller never ships, I’ll live. But when we start connecting real people in the real world, things get riskier and more dangerous. There are many important questions that we as entrepreneurs, investors and regulators should consider:

  • How much risk is acceptable in an “open access / high accountability” model and then how could regulators mitigate known risks by extending and building on regulation 2.0 techniques?
  • How can we increase the “lead time” for regulators to consider these questions, and come up with novel solutions, while at the same time incentivizing startups to “raise their hand” and participate in the process, without fear of getting preemptively shut down before their ideas are validated?
  • How could regulators adopt a 2.0 approach in the face of an increasing number of new models in additional sectors (food, health, education, finance, etc)?

Here are a few ideas to address these questions:

With all of this, the key is in the information. “High accountability” is another way of saying “built on information”. The key tradeoff being made by web platforms and their users is access to the market in exchange for high accountability through data. One could imagine regulators taking a similar approach to startups in highly regulated sectors.

Building on this, we should think about safe harbors and incentives to register. The idea of high-information regulation only works if there is an exchange of information! So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data? Such a system, done right, could give regulators the needed lead time to understand a new approach, while also developing a relationship with entrepreneurs in the sector. Entrepreneurs are largely skeptical of this approach, given how played out the “build an audience, then ask for forgiveness” model has become. But that model is risky and expensive, and having now seen it play out a few times, perhaps we can find a more moderate approach.

Consider where to implement targeted transparency. One of the ways web platforms are able to convince users to participate in the “open access for accountability through data” trade is that many of the outputs of this data exchange are visible. This is part of the trade. I can see my eBay seller score; Uber drivers can see their driver score; etc. A major concern that many companies and individuals have is that increased data-sharing with the government will be a one-way street; targeted transparency efforts can make that clearer.

Think about how to involve third-party stakeholders in the accountability process. For example, impact on neighbors has been one of the complaints about the growth of the home-sharing sector. Rather than make a blanket rule on the subject, how might it be possible to include these stakeholders in the data-driven accountability process? One could imagine a neighbor hotline, or a feedback system, that could incentivize good behavior and allow for meaningful third-party input.

Consider endorsing a right to an API key for participants in these ecosystems. Such a right would allow / require actors to make their reputation portable, which would increase accountability broadly. It also has implications for labor rights and organizing, as Albert describes in the above linked post. Alternatively, or in addition, we could think about real-time disclosure requirements for data with trust and safety implications, such as driver ratings. Such disclosures could be made as part of the trade for the freedom to operate.
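To make the “right to an API key” idea a bit more concrete, here is a purely hypothetical sketch in Python of what a portable, signed reputation record might look like. The field names, signing scheme and key handling are all invented for illustration; this is not any platform’s actual API.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the platform issuing the record.
PLATFORM_KEY = b"example-platform-signing-key"

def issue_record(worker_id: str, avg_rating: float, completed_jobs: int) -> dict:
    """Issue a portable reputation record, signed so other parties can trust it."""
    body = {"worker": worker_id, "avg_rating": avg_rating, "completed_jobs": completed_jobs}
    payload = json.dumps(body, sort_keys=True).encode()  # canonical serialization
    signature = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": body, "signature": signature}

def verify_record(signed: dict) -> bool:
    """A receiving platform checks the signature before trusting the record."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

record = issue_record("driver-123", 4.8, 2150)
print(verify_record(record))  # True: record is intact

record["record"]["avg_rating"] = 5.0  # tampering breaks the signature
print(verify_record(record))  # False
```

In practice the issuing platform would sign with a private key that anyone can verify against a public one; a shared-secret HMAC is used here only to keep the sketch self-contained. The point is the shape of the thing: reputation as a verifiable artifact the worker carries, not data locked inside one platform.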

Relatedly, consider ways to use encryption and aggregated data for analysis, to avoid some of the privacy issues inherent in this approach. While users trust web platforms with very specific data about their activities, how that data is shared with the government is not typically part of that agreement, and this needs to be handled carefully. For example, even though Apple knows how fast I’m driving at any given time, I would be surprised and upset if they reported me to the authorities for speeding. Of course, this is completely different in emergent safety situations, such as the Uber example above, where platforms cooperate regularly and swiftly with law enforcement.
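One simple version of the aggregate-data idea: a platform reports only bucketed counts to a regulator, and suppresses any bucket small enough to point at an individual. A minimal sketch, where the bucket width and anonymity threshold are made up for illustration:

```python
from collections import Counter

MIN_BUCKET_SIZE = 5  # hypothetical threshold: hide any bucket smaller than this

def aggregate_ratings(ratings, bucket_width=0.5, k=MIN_BUCKET_SIZE):
    """Report bucketed rating counts only, dropping buckets that could identify someone."""
    buckets = Counter(round(r / bucket_width) * bucket_width for r in ratings)
    return {bucket: count for bucket, count in sorted(buckets.items()) if count >= k}

# 40 high ratings, 12 middling, and 2 low outliers that would stand out.
sample = [4.9] * 40 + [4.4] * 12 + [3.1] * 2
print(aggregate_ratings(sample))  # {4.5: 12, 5.0: 40} -- the two outliers are suppressed
```

A real deployment would layer on stronger protections (differential privacy, encryption in transit), but the trade is the same shape: a useful oversight signal for the regulator, without raw user-level data leaving the platform.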

While it is not clear that any of these techniques would have prevented this incident, or that it would have been possible to prevent it at all, my idealistic view is that by collaborating on policy responses to the risks and opportunities inherent in all of these new systems, we can build stronger, safer and more scalable approaches.

// thanks to Brittany Laughlin and Aaron Wright for their input on this post