Aug 11

Google & Motorola

Very very interesting news about Google buying Motorola Mobility this morning. It’s got so many implications that it’s tough to take in all at once, so I wanted to capture a few thoughts quickly.

First thing worth pointing out, though, is this: we don’t actually know the shape of the whole deal at this point. Will Google keep the MOTO hardware business? Keep the patents and sell the hardware side? Keep both? It’s hard to know how their internal evaluation went, or what they’ll do from here, so a lot of this is speculation at best.

Having said that, a few thoughts:

– it’s another instance in a long history of software (and now Internet) businesses devouring the previous generation’s hardware businesses. Internet businesses are inherently more leveraged: distribution power trumps almost everything else, especially in a phase where the technology portion is maturing.

– along those lines, it’s interesting to think about what happens next for Samsung, RIM, HTC, Nokia, but I’m way more interested in what the software players do. All eyes in that regard are on Microsoft, but I think the more interesting long term questions are for Facebook and Amazon.

– 2 things it’s clear that Google didn’t buy MOTO for: its margins or its ~20k employees.

– seems like Google definitely wanted the IP portfolio.

– and it seems to me that, assuming they keep the hardware business, they want Motorola because it gives Google full control over the hardware and software stack, which is the only way they’ll ever be able to even approach the excellent UX fit & finish of the Apple offerings. I feel like that’s one of the top drivers, and maybe the most important one over the long term.

– One other thing that this merger is decidedly not about is distribution — if anything, Google’s distribution power with respect to Android is somewhat weakened, at least in the short-to-medium term, as they’re undoubtedly going to cause some grief with partners Samsung and HTC. Feels like Google has calculated that control over getting the experience right trumps any distribution help they might get from their handset partners.

All of this lines up pretty well with my post about Screens, Storage & Networks last week — the last 60 days have seen Google push hard to get in the top tier on Screens (MOTO) and Networks (Google+).

My most esoteric point I’ve left for last, though: one of the unfortunate consequences of this development is that I think it will move perceptions of big corporations building open software (and in this particular instance, I’m specifically talking about open source software) at least a few more notches towards the cynical. Anytime a company tries an open experiment like Android in the future, the inevitable line of questioning will be: “Sure it’s open now, but for how long?” Whether premeditated or not, the path of Android has been from wide open to asserting more and more control — and this is another data point on that path. I’m not criticizing or indicting anyone for this — I think it’s essentially just a natural evolution and response to market conditions that require tighter integration. I think in a lot of ways it’s inevitable in technology networks for this to happen. (And I’ve written about it a bit before.) My only real sadness here is that it’ll move cynicism on corporate open source efforts up one more notch, and that’s not good.

Overall, though, fascinating day, fascinating time. Big moves!

Aug 11

Design like you’re right…

It’s impossible not to think a lot about data these days. We’re generating it all the time, constantly. On our phones, on our televisions, on our laptops, in public spaces. And increasingly the best startups and Internet giants are using data to make better and better product decisions and designs.

Today at Greylock we announced that DJ Patil is joining us as Data Scientist in Residence, as far as I know the first time any VC has had a position quite like that. It’s a huge addition for us, and the expression of a bunch of deeply held beliefs about the state of the art in designing great products.

But as I talk about using data for design, I find that there’s a lot of misunderstanding about it — some people have the sense that it somehow makes designers less powerful, that you’re basing decisions purely on mechanical measures rather than designer intuition and genius.

In my view, however, data is what makes designers not only strong, but primary. It’s what turns designers from artists into the most important decision makers in a company, because it’s understanding the data that lets you understand what your users are doing, how they’re using (or not using) your products, and what you can be doing better.

It made me think back a bit to my own training as a UX designer (we called it HCI then) at Stanford in the mid-nineties, when the field was just starting to develop. We would spend a lot of time on ethnography and needfinding, doing paper prototypes and then basic mockups and user testing. And we’d get 80% of the way there, then go and build it.

Nowadays, the state of the art is still to do needfinding and some mockups early, but to get to a working prototype as quickly as you can, one that’s instrumented so that you can tell what’s happening and figure out whether you’re on the right track or not.
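To make “instrumented” concrete, here’s a minimal sketch of the idea — all the event names and helpers here are hypothetical, not any particular product’s or analytics tool’s API: log named events as people use the prototype, then look at a simple funnel to see where they drop off.

```python
# A minimal, hypothetical sketch of prototype instrumentation:
# record named events per user, then count a funnel of steps.
from collections import Counter
from datetime import datetime, timezone

events: list[dict] = []

def track(event: str, user: str) -> None:
    """Record one interaction with a timestamp; a real prototype would
    write this to a log file or an analytics endpoint instead."""
    events.append({"event": event, "user": user,
                   "at": datetime.now(timezone.utc).isoformat()})

def funnel(steps: list[str]) -> Counter:
    """Count how many distinct users reached each named step."""
    reached = Counter()
    for step in steps:
        reached[step] = len({e["user"] for e in events if e["event"] == step})
    return reached

# Simulated sessions: three users open the app, only one finishes signup.
for user in ("a", "b", "c"):
    track("open", user)
track("signup_start", "a")
track("signup_start", "b")
track("signup_done", "a")

print(funnel(["open", "signup_start", "signup_done"]))
# Counter({'open': 3, 'signup_start': 2, 'signup_done': 1})
```

Even a toy funnel like this tells you where the design is losing people — which is exactly the “am I on the right track?” signal the prototype is for.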

I think that’s generally the right approach, but it’s worth noting: instrumented prototypes can really only get you to local maxima — they can help you find ways to tweak and optimize the basic design you’ve got, but they can never help you find a radically different and better solution.
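The local-maximum problem is easy to see in a toy model. In this sketch the “conversion” landscape is entirely made up: greedy, metric-driven tweaking climbs the small peak nearest your current design and never discovers the much better design further away.

```python
# Why metric-driven tweaking only finds local maxima: a toy landscape
# with a small peak near x=2 and a much taller one near x=8.

def conversion(x: float) -> float:
    """Hypothetical metric as a function of a one-dimensional 'design'."""
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 10 - (x - 8) ** 2)

def tweak_and_measure(x: float, step: float = 0.1, iters: int = 200) -> float:
    """Greedy hill climbing: keep any small tweak that improves the metric."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=conversion)
        if best == x:
            break  # no neighboring tweak improves the metric
        x = best
    return x

start = 1.0                       # the basic design we shipped
local = tweak_and_measure(start)  # converges near x=2, scoring about 3
# The radically different design near x=8 (scoring about 10) is never
# reached, because every path to it starts with tweaks that look worse.
```

That’s the whole argument in miniature: instrumentation tells you which neighboring tweak is better, but only a designer’s leap can move you to a different hill.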

So when I talk about using data — and I talk about it a LOT — what I’m talking about is a mixture of the artisan/designer-led designs along with using data to figure out what’s best.

Thinking about it the other day, I was reminded of one of my favorite sayings that I learned from Bob Sutton: “Fight like you’re right, listen like you’re wrong.” Bob’s an organizational theorist, and what he means is really a paraphrase of something that I think Andy Grove said, which is that he wanted all his people to have strong beliefs, loosely held. In other words, he always wanted people to come in with a point of view — a design, as it were — but to be willing to move off of that point of view in the face of data.

So the modern, design oriented framing is this:

“Design like you’re right. Read the data like you’re wrong.”

In other words, you should always design the product you think/believe/know is what people want — there’s a genius in that activity that no instrumentation, no data report, no analysis will ever replace. But at the same time you should be relentless in looking at the data on how people actually use what you’ve built, and you should be looking for things that show which of your assumptions are wrong, because those are the clues to what can be made better. We all like to see the up-and-to-the-right happy MBA charts, and those are important. But they don’t help you get any better than you already are.

I wish we taught more of this blend, because all of the products we use would get better.

So: design like you’re right; listen like you’re wrong.

Jul 11


Yesterday my Twitter follower count ticked over 50,000 for the first time. And while I wouldn’t exactly call that a lifetime achievement or milestone, it has caused me to reflect a little bit on Twitter specifically and the Internet more generally, so I thought I would write down some of those thoughts here.

Off the top, let me say this: I really love Twitter. A lot. I use it every day — I don’t always post things (although most times I do), but I always read and discover new things — it’s become integral to me in a bunch of ways. I share interesting articles about technology and startups and politics and literature that I find. I link to my blog posts like this one. I ask questions, mostly about travel and technology. I vent about things (I’m looking at you @unitedairlines). I talk about TV and music that I like. I track a bunch of my friends and coworkers and how they’re doing. And I make a lot of dumb jokes.

What’s clear at this point is that I’m not a particularly typical Twitter user. As services evolve, they find their main use cases, their reasons for existing. You’ve got Facebook for interacting with friends in symmetric ways; you’ve got Quora for getting high quality answers to questions; you’ve got Tumblr for expressing a synthesis of media that in aggregate represents you.

Twitter has evolved, I think, into essentially a celebrity broadcast medium. Now, I’m using the term ‘celebrity’ a little broadly — there are the Biebers and Gagas, of course, but there are also the CNNs and NPRs of news, and the Saccas of the tech world, and the long middle part of the curve of bands and critics and pundits that have tens or hundreds of thousands of followers. It seems obvious to me at this point that this is really what Twitter is for: tracking our mega and mini broadcasters, being able to follow along in real time to see what they’re doing and writing, and what they’re amplifying from others.

That’s part of how I use it, but I think that my use case is somewhat more complicated, which makes my tweets pretty atypical. My tweet stream is more like a mix of broadcasting, retweets, active conversations with friends, debates with other techies, and a bunch of snarky jokes.

I think there are a few reasons for this.

First, because I’m more of a “Twitter native” — that is, someone who’s been active on the system since the first million users — I’ve been part of the ‘figuring out’ conversations that have happened, mostly as a user. So I’ve gone through several generations of the product before it landed on celebrity broadcast as the center, and some of those generations of use case have really stuck with me.

Second, I developed a bunch of my patterns while I worked at Mozilla, a uniquely open organization where Twitter really fit. Because Mozilla deliberately doesn’t have a ton of internal systems for closed communications, we liked to have conversations in the open: on public wikis, on open IRC channels, and on Twitter. And because I had management responsibility for a distributed, global organization, it helped me keep track of folks I wasn’t able to see every day. Beyond that, it let me have some interactions with people in a public way that I could model, so that others would see them and (maybe) learn from them. In a lot of ways, I think of it as the modern equivalent of Managing by Walking Around, popularized by Hewlett-Packard long ago. It’s easy to brush off this use case as not real, but I really did use it a lot for helping to manage at Mozilla.

And while Mozilla is obviously unique in its openness, in a lot of ways the Silicon Valley ecosystem shares some of the characteristics, with lots of actors who are decentralized and distributed, working in different ways but able to share public communication channels like this.

The third reason I’m quirky in my use, I think, is that I make so many jokes on it. I’ve always been a guy that’s most comfortable at the back of the classroom making jokes. It’s not necessarily the part of my personality I’m most proud of, but it’s what I do. I’m happiest in the back, scribbling semi-related ideas to what’s going on, making jokes to myself or friends. Twitter gives me a pretty good way to do that sort of thing without being disruptive, and it’s fun for me.

I guess last is the fact that a lot of close friends also spend a fair amount of time on it, so keeping up with them and interacting with them there is fun and rewarding.

As I’ve moved up to 50k followers and beyond, I think it’s going to start changing how I use it a bit, for better or worse. It’s becoming somewhat more of a broadcast/audience thing and less of a group-of-friends thing. It remains extremely useful and integral to me, but probably will be so in different ways.

Anyway, enough for now — just thought I’d capture a few thoughts here that wouldn’t fit in 140 characters. 🙂

Jun 11

My Interview in Fast Company

Fast Company just put up an interview with me done by Kermit Pattison, and I’m really, really happy with it. It covers a lot of topics, including how I think about leadership & management (they’re not the same!), some lessons I’ve learned about how to be more extroverted, and some very important lessons from along the way that I’ve only recently started to really understand. Kermit did a really good job of capturing the essence of how I think about this stuff. Would love to read any impressions, reactions, arguments or otherwise that you have. 🙂

Mar 11

HCI:20 and me

This post is a little bit random — some reflections on my own past triggered by an event at Stanford — might be of general interest, might be of interest just to me. That’s sort of why I blog. 🙂

Anyway, a few weeks back I was lucky to attend HCI:20 at Stanford, a celebration of the 20th anniversary of the Human-Computer Interaction program at Stanford, started by Terry Winograd. There were a bunch of themes that I found noteworthy, and it was great to reflect on the origins and history of the program. And it was really fantastic to hear colleagues and friends of Professor Winograd talk about his contributions and impact over many years.

One of the first speakers talked about a paper Winograd published in January 1971 — coincidentally the month I was born. It was an AI paper on some work he was doing at the AI Lab at MIT — really focused on computers understanding human language. And that was Terry’s focus for quite a long while, doing work with Flores on computers and cognition. It’s amazing to think about that — that so much of the modern discipline of HCI and interaction design grew up from roots in getting computers to understand and communicate in natural language. It makes total sense, of course — that the same people who were trying to figure out how to get computers to understand how to interact with us are the people now trying to build more effective interfaces — the interfaces have just changed.

The lineup of speakers was incredible — sort of a historical trip from then until now. Here are a few:

  • Danny Bobrow & Stu Card (early NLP)
  • Fernando Flores (who Terry wrote Computers and Cognition with)
  • Eric Roberts (who worked early on CPSR and the ethical foundations of computing)
  • Reid Hoffman (trained in Symbolic Systems & philosophy)
  • Don Norman (trained as an EE and a psychologist)
  • Steve Cousins (robotics)
  • David Kelley (founder of IDEO and the Stanford d.school)

So you see a journey from language/AI through philosophy & psychology and on to design thinking — in my view, that’s when things really started taking off. The foundations in linguistics and computation (not to mention ethics) were extremely important, but it was when iteration and design thinking got into the mix that the field really started gaining momentum and influence.

I started my own interest in HCI in about 1991, when the work was just starting to be oriented around design thinking (Bill Verplank from PARC, IDEO and Interval) and anthropology. The program had just started; I was probably a sophomore or junior at the time, and a senior friend of mine named Sean White kept telling me that I should look into it, that I would like it a lot. I kept brushing him off — I thought my path was going to be in (what I thought was the significantly more technical and higher impact world of) computer architecture design (RISC is the future!).

There were two events that were pivotal for me (beyond Sean’s good-natured prodding). [and a short aside here is in order — not only did Sean affect what my course of study would be, but about a decade later, having not been in touch for many years, out of the blue Sean sent an e-mail introducing me to someone named Reid Hoffman, then an exec at PayPal. No agenda, no motive, the note just said that he thought we might like knowing each other. That was the start for what’s turned into an exceptionally productive relationship — Sean profoundly affected my life a 2nd time!] But back to this story…

The first pivotal event was an internship I had at Sun Microsystems, working on graphics hardware. At the time it was obvious that Silicon Graphics was the important competitor and that hardware architecture was the important thing to work on. (Note to self: what seems completely, totally obvious today often seems pretty ridiculous in hindsight.) But fortunately that was a time when Scott McNealy was CEO and he really opened up the place to interns — he really encouraged us to poke around inside Sun, to talk with interesting people, and to generally make nuisances of ourselves. One of the guys I’m sure I annoyed was Bob Glass, a UI designer nicknamed “Dr. Bob” who had come to Sun from Apple to “drain the swamp or pave it over” — a reference to the crappy UIs that Unix always had, especially compared to Apple. Clearly, he didn’t really win that particular battle, but he framed an important problem for me as we talked in his office. He said this: “Who cares how fast the architecture is if nobody uses it?”

That single question, quite literally, changed my life.

I finally started to understand what Sean White had been saying all along, and started looking seriously into pursuing HCI at Stanford. The second pivotal moment for me came shortly after, when I read an essay by Mitch Kapor making the case for software design as a profession.

After that series of events, I was pretty well hooked, and dove into learning everything I could about how to design systems that people actually wanted to use; software that made people’s lives better. I started working on my master’s degree at Stanford with Professor Winograd as my advisor.

There were only 2 courses in the curriculum at the time: CS247A, something like fundamentals of HCI, taught by Bill Verplank, and CS247B, something like using anthropological techniques to do needfinding, taught by someone who I remembered liking a LOT, but who I can’t recall anymore.

At that point, I became the Annoying Junior Design Guy, quoting a (complaining) Don Norman all the time, asking everyone why the clocks on their VCRs and microwaves were always blinking “12:00” and generally just bitching about how badly designed the world was. I’m sure I was a real treat to be around. But then I got involved in a few more classes that I just really loved.

CS447, taught by Terry Winograd and David Kelley, was a design lab affiliated with the then-annual Apple Design Competition — I learned a lot about how hard it is to actually make things that don’t suck. (Which, happily for everyone, subsequently reduced the amount of complaining about bad design that I did.)

I remember taking a class on Filmcraft in User Interface Design that pretty much blew my mind. The instructors of that class were Chuck Clanton and Emilie Young from First Person, a Sun spinout building a set-top box that would fail, but whose technology would ultimately become the foundation of the Java programming language. They were really pioneers in thinking about how to use animation in computer interfaces — very early forerunners of the physics in the UI of today’s iPhones.

Many foundational elements were put in place by Professor Winograd and friends in the early nineties — but I think that maybe the most important was getting the IDEO folks, and David Kelley in particular, involved. It brought a human-centeredness to the work that we did and that Stanford taught, and a religion around iteration that has served the program well since, and paved the way for a lot of what the Stanford d.School is today. Winograd did all this stuff at a time when, especially among “proper” computer scientists, it wasn’t very fashionable — but he had conviction and passion around the work — and of course he was right to.

From the vantage point of 2011, it’s clear that the work done by Winograd and the rest of the growing HCI group there is important and has had a large impact on creating thoughtful designers.

It’s also very, very clear that our educational system hasn’t produced nearly enough good designers who are technical enough and talented enough to build all the great products and companies that Silicon Valley (and the world) are trying to build. That’s not particularly an indictment of the educational system — we’re in a golden age of technology development — a sort of New Cambrian Age of personal digital life. There are so many new things to build, so many new areas of communication to explore and create, so many new interactions to create from whole cloth — as a society and as an industry, we’re going to have an insatiable appetite for great designers certainly for the coming decades.

We’re in a time now when everything’s changing; everything is up for grabs. I’m just incredibly glad that Professor Winograd and his colleagues had the foresight to set the foundations that we’re building on so quickly today. And personally grateful for Sean White pushing me to notice the things that were happening right under my nose.