Jul 11

Screens, Storage & Networks

I’ve been thinking a bunch about platforms lately, and how they’re evolving very very quickly. Generally, there are two categories of thing that people talk about as platforms. Traditionally, they’ve been computer operating systems: Windows, OS X & Linux, now iOS & Android. Lately people are talking about cloud platforms: services like EC2, but also web services with APIs that other apps are built to integrate with.

But more and more, that’s not the way I’m thinking about my own systems, as devices proliferate in my home and I use tiny connected computers in more numerous and varied contexts.

I’ve been interested in what I call “4 screens & a cloud” products for a while: products that help us unify and take advantage of our laptop + phone + tablet + TV — but it all became a little clearer to me a few weeks ago when a wave of devices entered the house all at the same time. In the space of a few weeks, I upgraded to an iPad 2, got a Samsung Tab to experiment with Android tablets, got an Android phone in addition to my iPhone, and got a WebOS phone from the D9 conference. So we had all those devices in the house, plus our iMac, Kathy’s set of devices, and my mom’s as well, since she was visiting. Oh, and 3 Kindles among the three of us. Screens were everywhere.

Now, I’m the first to recognize that we’re somewhat atypical in our technology consumption in normal times; add to that the devices that I’ve picked up lately because of work and my house is a jumble of operating systems, devices and power adapters. Exciting!

When you get that many screens and devices, what happens is interesting: when you want to do something, communicate with someone, remember something, schedule an appointment, read a book, or whatever, you just pick up whatever screen is nearest to you and work from that.

Well, you do that if you can. Because in our current platform chaos, not all devices are fungible, and not all activities are available on all platforms.

So that got me thinking some about what I need, and where, and in what contexts and on what devices, and now I think about platforms this way: I have a set of screens, a set of stuff, and a set of people that I want to do things with — and I want those sets available to me wherever & whenever I am.

By screens, I mean something more than just pixels: I really mean input & output systems, of which screens are the most visible parts; really it should probably be screens, sensors & speakers. In other words, it’s the displays of each system, the audio systems, and the ways that we indicate intent, be it typing, swiping, speaking, remote-button-pushing, smiling, waving, running, or just being.

By storage, I mean something more than just bits: while Dropbox and iCloud and Cloud Drive are important, I want to do more than just store and share my files with others. It’s about more than having a place to put my music. It’s about having the context of my life: my apps, my reading material, my history of shopping & purchase intent. It’s really the things I’m creating, consuming, sharing, saving, working on and just thinking about. One of the things that’s probably non-obvious about this formulation is that for this to work, the storage is going to be pretty tightly keyed to my identity. Without knowing something about who I am, it won’t work.

And by network, I mean something more than just my Facebook graph: what’s becoming clear is that we’ve all got many and diverse groupings in our lives, ranging from the very intimate groups of a nuclear family to the wide-ranging groupings of Twitter followers. The short version, though, is that it’s becoming increasingly clear that, just like in the offline world, people online want to do things with each other. Shocking, I know.

That’s the definition of platform that’s relevant to me: a combination of screens, storage and networks that help me do my work and live my life. The companies that see that true platforms transcend any one particular technology stack will be the ones that prosper — you can already see some interesting ones emerge.

As a side note, I think screens, storage & networks is one way to look at the landscape of the giants competing: it’s where Apple, Google, Facebook & Amazon are slugging it out (and to some extent it’s the evolution now of my old stomping ground, Mozilla). I would argue that each of the giants has a super strong position in 1 or 2 of the three areas, but none has a lock on all three, and most of the interesting initiatives of each are about strengthening the places where they’re historically weak.

Apple is obviously terrific at screens, okay at storage, and not very good at networks.

Google’s now strong at screens (although probably not as strong as Apple), could be great at storage, and finally has a credible start on networks.

Facebook is incredibly strong at networks, has some weakness in screens, and is pretty good with storage (at least for things like photos).

And Amazon is very strong on storage, weak at networks, and weak (at the moment) on screens.

I’d argue that their relative strengths and weaknesses are important for startups to understand as well, as that gives you a bit of a map of one set of opportunities.

Anyway, that’s how I’m thinking about things lately. What do you think?

Jul 11


Yesterday my Twitter follower count ticked over 50,000 for the first time. And while I wouldn’t exactly call that a lifetime achievement or milestone, it has caused me to reflect a little bit on Twitter specifically and the Internet more generally, so I thought I would write down some of those thoughts here.

Off the top, let me say this: I really love Twitter. A lot. I use it every day — I don’t always post things (although most times I do), but I always read and discover new things — it’s become integral to me in a bunch of ways. I share interesting articles about technology and startups and politics and literature that I find. I link to my blog posts like this one. I ask questions, mostly about travel and technology. I vent about things (I’m looking at you @unitedairlines). I talk about TV and music that I like. I track a bunch of my friends and coworkers and how they’re doing. And I make a lot of dumb jokes.

What’s clear at this point is that I’m not a particularly typical Twitter user. As services evolve, they find their main use cases, their reasons for existing. You’ve got Facebook for interacting with friends in symmetric ways; you’ve got Quora for getting high quality answers to questions; you’ve got Tumblr for expressing a synthesis of media that in aggregate represents you.

Twitter has evolved, I think, into essentially a celebrity broadcast medium. Now, I’m using the term ‘celebrity’ a little broadly — there are the Biebers and Gagas, of course, but there are also the CNNs and NPRs of news, and the Saccas of the tech world, and the long middle part of the curve of bands and critics and pundits that have tens or hundreds of thousands of followers. It seems obvious to me at this point that this is really what Twitter is for: tracking our mega and mini broadcasters, being able to follow along in real time to see what they’re doing and writing, and what they’re amplifying from others.

That’s part of how I use it, but I think that my use case is somewhat more complicated, which makes my tweets pretty atypical. My tweet stream is more like a mix of broadcasting, retweets, active conversations with friends, debates with other techies, and a bunch of snarky jokes.

I think there are a few reasons for this.

First, because I’m more of a “Twitter native” — that is, someone who’s been active on the system since the first million users — I’ve been part of the ‘figuring out’ conversations that have happened, mostly as a user. So I’ve gone through several generations of the product before it landed on celebrity broadcast as the center, and some of those generations of use case have really stuck with me.

Second, I developed a bunch of my patterns while I worked at Mozilla, a uniquely open organization where Twitter really fit. Because, by design, we don’t have a ton of internal systems for closed communications, we like to have conversations in the open, on public wikis, on open IRC channels, and on Twitter. And because I had management responsibility for a distributed, global organization, it helped me keep track of folks I wasn’t able to see every day. Beyond that, it let me model some interactions in public so that others would see them and (maybe) learn from them. In a lot of ways, I think of it as the modern equivalent of Managing by Walking Around, popularized by Hewlett-Packard long ago. It’s easy to brush off this use case as not real, but I really did use it a lot for helping to manage at Mozilla.

And while Mozilla is obviously unique in its openness, in a lot of ways the Silicon Valley ecosystem shares some of the characteristics, with lots of actors who are decentralized and distributed, working in different ways but able to share public communication channels like this.

The third reason I’m quirky in my use, I think, is that I make so many jokes on it. I’ve always been the guy who’s most comfortable at the back of the classroom making jokes. It’s not necessarily the part of my personality I’m most proud of, but it’s what I do. I’m happiest in the back, scribbling semi-related ideas to what’s going on, making jokes to myself or friends. Twitter gives me a pretty good way to do that sort of thing without being disruptive, and it’s fun for me.

I guess last is the fact that a lot of close friends also spend a fair amount of time on it, so keeping up with them and interacting with them there is fun and rewarding.

As I’ve moved up to 50k followers and past, I think it’s going to start changing how I use it a bit, for better or worse. It’s becoming somewhat more of a broadcast/audience thing and less of a group-of-friends thing. It remains extremely useful and integral to me, but probably will be so in different ways.

Anyway, enough for now — just thought I’d capture a few thoughts here that wouldn’t fit in 140 characters. 🙂

Jul 11

Announcing Citrus Lane

I’ve been at Greylock 6 months now, and have a bunch of learnings and observations I’ll start sharing on the blog soon!

But in this post I’m really happy to share that the first investment that I’ve led is in Citrus Lane — an investment we made a few months ago.

Citrus Lane is a modern subscription e-commerce company that’s focused on getting awesome, healthy, useful & delightful products to young families. They launched their site and products last week. 🙂

I’m really, really excited to be involved with the company.

I’m excited about the category: subscription commerce is coming like a freight train; we’re in a period where the way we buy products and services is changing dramatically and quickly. The comprehensiveness of Amazon’s offerings and the ubiquity of the modern logistics chain have paved the way for more thoughtful, curated, unique offerings to consumers, highly targeted by interest, lifestyle and personal tastes.

I’m excited about the particular sector: as a family with a kindergartner, I’m acutely aware of how you go from month to month never knowing whether you could be doing better at taking care of your children, thinking there must be better ways to do things and better products. It’s obvious to me that we’ve made product and process decisions that will last for years. And it’s super obvious that young parents, and especially moms, control trillions of dollars of product decisions.

And I’m particularly excited to work with the CEO & Founder, Mauria Finley. I’ve known Mauria for more than 15 years — she’s a Stanford-trained computer scientist with a particular expertise in product design, and has held product leadership roles at eBay, PayPal, Good, AOL and elsewhere. She’s fantastic, and a highly motivated first-time CEO. She’s been great to work with so far and I think will continue to be tops.

She’s putting together a very interesting team that includes Claire Hough, her co-founder & CTO — previously of NexTag, Blue Martini, Netscape, Napster and more.

So they’re launched! Go take a look and see what you think. Watch this space (and follow them on Twitter!). 🙂

[PS — I’ve done several other investments, but they’ve been from our Discovery Fund, which we typically don’t announce publicly unless the companies really want us to. This is somewhat different in that I’m on the board of directors and it’s a more significant level of investment.]

Jul 11

Leviathan Wakes, by James S.A. Corey

I like reading a good space opera every once in a while, and really enjoyed this one. Or rather, it’s a space opera in content and themes, but structured more as a noir + horror novel. Good mix of gumshoe and sci-fi.

This is sort of a tweener in science fiction — not an immediate-future type of book, and not a far-future book like the Iain M. Banks books. It’s set in our solar system after we’ve colonized Mars and the asteroid belt — so the big political entities are Earth and Mars, with the asteroid belt as a sort of frontier land.

Fun book, good pacing, entertaining. First of a trilogy.

Jul 11


On Monday we got back to the house after our 4th of July festivities to see this scene:


It’s a sofa! In front of our house. And not one of those hip, polyurethane outdoor couches from Design Within Reach, but more like a ratty, heavily used, might-be-something-living-in-it type of couch.

As you might imagine, we were a little surprised by it, and somewhat mystified by its presence. (And a little stumped on how to, you know, get rid of it.)

Anyway, imagine our surprise the next day, when Kathy looked out the window and saw this scene:


There’s a guy sitting on the couch! Nice! And, again, not totally what we were expecting.

So we puzzled over that a while. With a little sleuthing, and help from my Twitter friends, it’s now obvious what’s happened: we’ve been visited by the Couch Fairy.

It’s a story as old as time, and we all know how it goes: an always working, ambitious person (the protagonist) returns home from a trip to find a sofa on their lawn that they weren’t expecting. After a little while, they discover a grouchy, impish old man lives on the couch, although he comes and goes at some unpredictable intervals. When asked questions directly, the old man (who’s really The Couch Fairy, as we all know) never answers about who he is or what he’s there for, or, indeed, where the couch came from. All attempts to remove the couch — whether by having people take it away, chopping it up into little pieces, setting it on fire, whatever — ultimately fail, as the couch keeps returning over and over.

After weeks or months of this type of activity, the protagonist ultimately accepts that you can’t always take care of things in a direct, get-it-done! type of manner, but sometimes have to accept that things are how they are — so he or she takes a break, sits on the couch and just slows down to watch the world go by with the Couch Fairy for an afternoon. And they realize that they like this newfound slower pace of life.

You know how it ends — the next morning they come out to sit on the couch again and watch the world go by with the Fairy, but both the couch and the Fairy are gone, nowhere to be found. But the main character has learned to slow down a little bit, enjoy the world around them.

So that’s clearly what we’ve got going on here. Nothing to do but slow down, accept the situation and roll with it.

Like all archetypal stories, this is one that’s been told & retold dozens of times. The canonical first known telling is, of course, Hans Christian Andersen’s The Couch Fairy (Das Sofa Elf), which is the one we’re all familiar with from our childhoods.

Then there was Kafka’s The Sofa, and the absurdist Austrian play from the 20s “There is no couch.” (Es gibt keine Couch.)

And of course there have been many movies, including:

  • 1976:

    The Couch Fairy, starring Dustin Hoffman as the protagonist and Walter Matthau as the Couch Fairy

  • 1988: What’s up with the sofa?, starring a goofball Robin Williams as the Imp and Sean Penn inexplicably cast as the harried main character

  • 2002: Couch This!, a slightly angrier version with Robert De Niro in the role, and Jason Bateman in the main role

There was always a plan to do a more modern version directed by Terry Gilliam, but it’s been plagued by a series of disasters. First it was set to star Will Ferrell, then Will Arnett, and now Jimmy Fallon is attached to it, which I think means obviously it’ll be straight to video, no theatrical release.

And don’t forget David Foster Wallace’s take: “Who will speak for the Ottoman?” featured in the New Yorker in 2006. It’s a long essay, and pretty esoteric in my opinion, but really a must read if you’re interested in the canon.

And of course we’re all extremely excited for the upcoming 2012 Broadway retelling “Couched!” which features Whoopi Goldberg as the Fairy with Nathan Lane set to sing the protagonist’s role.


So that’s where we stand. Gotta wait the old guy out.

[Thanks to @shaver, @mart3ll, @humphd & others for the clues to figure things out this morning!]