CAROLINE O'CONNOR: Hi there. I am Caroline O'Connor,
and I met Jake and John a couple of years ago when I
was a designer in residence at Google Ventures. I was coming over from the
Stanford Design School, where I was a faculty
member, and I had a chance to see the sprint process
that these guys were running with startups up close,
and I was really blown away. It was a process that we
were teaching at Stanford, but I will say these
guys were getting amazing results with what
they had honed and refined. And so I'm really, really
excited to introduce them today to talk about the book
"Sprint" that they've written, which really lays out
a blueprint for running these kinds of sprints for
your teams and how it's useful and how it can help
in the bigger picture. [APPLAUSE] JAKE KNAPP: Thank you, Caroline. Thanks, you guys. So before we have a
little discussion, I wanted to talk to you
guys about where the sprint process came from and tell
you a little story about it. So it all begins back
in the year 2003, and it was that year that
my first son was born. And that's a picture of him. And I have to tell you
that when my son was born, I kind of freaked
out a little bit, because I had this
realization that he was going to be growing up. He was going to be a baby and
then a toddler and then a kid, and this whole sort of life
was going to be going on. And when I went
back to the office, I realized every day
I'm at the office, I'm kind of like missing that. I'm missing out on
a piece of his life. And so what I do at the
office should really matter. I should be making the most
of those hours and days that I spend away. And so I thought,
well, I'm going to take a look at
what I'm doing. And so I thought
about my work week, and I looked at my
weekly schedule. And it looks kind of like this. It was meetings. People would schedule
me for meetings. I'd say yes. I'd schedule other
people for meetings. They'd say yes. And if I ever wanted to
get anything important accomplished, I
had to kind of wind my way through this like
obstacle course of meetings. And I realized I
wasn't really doing sort of the best use
of my time by winding through those meetings and
checking email and just kind of churning a lot of the
time, so what I decided to do is get productive. And I read every book I
could find on the topic. I read "The 4-Hour Workweek" and "Getting Things Done" and "Brain Rules,"
and I experimented with different kinds
of to-do lists, like five over the
course of a couple years. Pretty crazy. And over time, I
did get productive. In those little windows when
I had a gap between meetings, I would make the most
use of it possible. And a few years later, 2007, I
was at Microsoft in those days, and I left and came to Google. And when I came to
Google, I thought, man, I can't wait to see what the days
are going to be like at Google, because Google is this
like crazy new place. What's it going to be like? And it turns out, as
you guys might know, it's kind of the same. There are a lot of meetings. But I had honed these
sort of productive skills for those gaps between
meetings, and I knew how to get things
done, and I also found that at Google,
there was a real spirit of experimentation, and there
was a spirit of experimentation in the products but also
in the way people worked and the way teams worked. People were willing
to experiment with how they got things done, and I started a new kind of quest to make the team process better. And I imagined, what if
I could just sort of wipe the week clean, wipe
the calendar free of all those meetings,
and start from scratch? And so I started doing
day-long workshops with teams and then a couple
of day-long workshops, and eventually ended up
with this week-long process that I call a sprint. And that's what we're
going to talk about today. And I took this
sprint process in 2012 to Google Ventures,
which is now called GV, because since the
Alphabet thing, we've lost all of our letters. So we're just GV,
and at GV we have this interesting
challenge, because we're investing in
startups, and we want them to be as
successful as possible. If you run a startup, you've
got all kinds of ideas, but it's hard, because
you don't know which ideas you should focus on. It's hard to know. Some of these might be big hits. They might make
you a lot of money. They might be really successful. But a lot of them are worth
pennies on the dollar, and some of them are
downright dangerous. But up front, you don't know. They're just a bunch
of question marks. They're just a bunch of ideas. And so this idea in startup
land, in Silicon Valley, but really around
the world has caught on that you should take an
idea, make your best hunch, and just get data about
it as fast as you can. The trouble is that getting data
requires you to usually build something and launch it. And as most folks who have
gone through this cycle know, building almost always
takes longer than you think it's going to take. And so that knowledge
that the build process will probably take-- if we
think it's going to take weeks, it'll probably take
months, and if we think it's going
to take months, it might take years--
tends to make us more cautious about
the ideas that we try. And we argue more about
which hunch we should follow. So the sprint gives
you a chance in a week to collect data
quickly, and we found that that was a really valuable
technique for our startups. They could try riskier ideas
and try them a lot faster. So that's what I'll tell
you guys about today. And the process is
broken down day by day into five big steps with a
lot of little steps inside. And rather than just tell
you about how it works, I'm going to tell you a story. We've done over 100 of these
with the different startups in the GV portfolio,
and many of them are ones that you
might have heard of, but I'm going to tell you
a story today about one you may not have heard of. It's a company called
Savioke, and Savioke, when it was first-- well, this
is their mission statement, but when it was first described
to me by my colleague, John, who you'll meet
in just a moment, he was like, these
guys make robots. And we've totally got
to try to find a way to work with these guys. And luckily as it
turns out, they had a really pressing
question that they wanted answered right away. And so it was a perfect
time to do a sprint. So this is the
founder of Savioke, this guy named Steve Cousins,
and Steve is a robotics expert. He worked at a place called
Willow Garage for years and years, and he
left Willow Garage with the idea of building
robots for the service industry, robots that would be helping us
in our sort of everyday lives. He wanted to take robots out of
this kind of abstracted world of these really high
tech expensive robots and make ones that we would have in everyday life. So he built this team of
roboticists, engineers, designers, and they decided
to build their first robot for hotels, which might seem
like kind of a funny choice, but if you think about
working at a hotel front desk, your day might look
something like this. This is a diagram that
I just sort of made up, but I think it's
probably fairly accurate, because if you imagine the
morning times when people are checking out and unpacking,
moving around, the evenings with people checking in,
there's a lot of activity during those windows at the
front desk with people who are there physically,
and then also people are calling from the rooms
and asking for room service, asking for a toothbrush,
a towel, some extra thing. And so it's really hard
to staff those spikes. So Savioke thought, if
we built a robot that could make some of
those deliveries, we could let the folks
at the front desk stay there, be focused there,
do a really good job there, and then we could also make
sure that people in the rooms got stuff they needed. So for the first five
months of their existence, Savioke built these progressively more sophisticated prototypes until they'd created this thing, which
is called the Relay robot. The Relay is a robot about
the size of a trash can. It is self-driving, so it
can sort of autonomously navigate the hallways. It's got a little hatch on
the top with a locker in it, so at the front desk, you could
put in a towel or a toothbrush or whatever, close it up,
type in the room number, and then the robot would drive. It can call the elevator,
ride the elevator, and make the delivery
right at the room door. So all that was great,
and they had worked out this pilot program with the
Aloft Hotel in Cupertino. And they were going to take
their first-- they just had one robot that
worked, and they were going to take that one
robot and put it into service, and for a fledgling
startup, this is like a great opportunity,
but also a really big risk. They wanted to make
sure they got it right. And it was going to start making
deliveries in just a few weeks. But they still had one really
big question left unanswered. They weren't sure exactly
how the robot should behave. It's a really good question,
because if you guys think about, say, Isaac Asimov--
I don't know if anyone here is familiar, if there's
any science fiction nerds in the audience, but
I'll confess to being a science fiction nerd. There's this really
sophisticated idea about how robots
should behave, and it's deeply ingrained
into the science fiction that has come ever
since Asimov wrote those ideas down. But as Steve told
us, look, this robot is not that sophisticated. It's not going to be
thinking about much. It's just going to
be delivering things, and it can't have a
conversation with you. He said, we've really been
spoiled by WALL-E and by C-3PO, and we think that robots have
sort of thoughts and plans and hopes and dreams,
and the reality is that this robot
won't be able to have any kind of a
dialogue with people, and they were really
concerned that people might get frustrated or
disappointed if they tried to interact
with the robot, ask the robot to do
something, tell it something, and got no response. So in order to play it safe
with this launch coming up, the safest thing to
do is just make sure that the engineering worked
well-- they had done that-- and not give it a
personality, not do anything that might ruin
all that other hard work. But they still kind of
wondered, because it seemed like this
interesting opportunity, and they had a bunch of ideas,
but they didn't have a good way to test them. So we did a sprint
together with them. We got to be there in the room
while these guys hashed this out over the course of a week,
and what happened is on Monday, they brought the robot into our
building here in Mountain View, so we could do the sprint there. They rolled it in. Here, it's covered
up in a sheet as though a ghost might be less
conspicuous on the Google campus than a robot. And brought it in, and as you
can see here, it kind of looked like a printer when they
first took the blanket off, and we were like,
well, OK, you know. But then we saw
it driving around, and there's something kind
of amazing that happens, because this thing can-- it sees
you, and it kind of cautiously moves around you. And they had really engineered
that motion exquisitely. It seemed very dog-like almost,
and so we fell in love with it. And this is possibly the
first robot selfie ever taken. But we just thought
like, this robot is-- OK, let's try the personality. We got excited. So on Monday, the first job
for the team in the sprint is to share all the
information that they've got and make a map. And then on that map,
pick the best spot to focus for the sprint. So we decided that
the moment of delivery was both the biggest risk
and the biggest opportunity for personality, because
up to that moment, it's possible that you
never saw the robot. You didn't even know
that the robot existed. You asked for something to be
delivered, and lo and behold, you open your door. You're wearing god knows
what, and there's a robot at the door of your hotel room. So what we did then on
Tuesday was to come up with a bunch of solutions. How might the personality work? And when we do this, instead
of having a big group brainstorm where everyone's
shouting out ideas, we instead have everybody
sketch on their own. And I mean everybody,
so in this instance, it's not a bunch
of just designers or product folks working on it. It's everybody on
the team, and when I say we're working
quietly, it looks like this. It's kind of boring. Everybody is in great detail
sketching out their solutions. So then by Wednesday,
we've got all these different competing ways
for how the robot might behave, and we decide from
all of this-- I think we had 10, 15
different approaches-- to focus on three big ideas. So first, we've got giving the
robot a face, which obviously carries some risk, because it's
going to promise personality. We thought about a lot of
different kinds of faces, and we ended up going with
this kind of sleepy robot that sort of matched that
dog-like impression we got from the way the robot
moved, and we thought it might suggest that you
can't really talk to me. I'm nice, but you can't
really talk to me. We wondered if people would like
to play games with the robot, and we had this idea
of the robot doing sort of a happy dance after
the delivery was completed. So on Thursday, we had
to build a prototype, and we'd spent the
first three days kind of getting all
these ideas on the table, making all the choices, and now
we had to build a prototype. And we think it's really
important that the prototype be realistic so that when
we test it with customers on Friday-- that's
how the sprint ends-- we'll be able to
trust their reactions. So we put together
sound effects, sort of a sound effect library,
and we divided up the work. So here's somebody from Savioke
working on that sound effect library. Here's the face,
designing the face, and then just putting it in Keynote on an iPad Mini, and we replaced the panel
on the front of the robot with an iPad
temporarily, because it would look realistic. And finally, the
robot choreography. So normally, the robot is
running completely on its own. It's all done with sensors. It's all programmed out,
but as you can see here, Tessa's got a remote control
like a PlayStation remote. And temporarily, we could do
sort of a Wizard of Oz thing with the choreography. It only needed to work
for five deliveries for us to be able to get
some data on it. So finally, it's Friday. It's time for the test. We're going to find
out what happens when you expose this robot
personality to people. And what we did, and
what we always do is to do these
one-on-one interviews. So in this case, our
partner, Michael Margolis, came down to do the
tests, and here's a photo of him in
front of the hotel. And he comes in at 7:00
in the morning and starts sort of rigging the
room up for the test. So he's got a suitcase,
and inside a suitcase, he's just got all kinds of gear. He's got computers. He's got tripods. He's got duct tape. I don't know how he gets
through airport security, but he does consistently. And with the help
of the Savioke team, they're making sort of
a makeshift research lab out of the hotel room. So they're wiring
up these Dropcams so that we'll be able to see
as the robot kind of moves into position. And then at 9:00 AM, it's
time for the first test, the first one-on-one interview. So I want you just for
a moment to imagine, put yourself in the shoes
of this first customer who shows up, and
earlier in the week, you responded to
an ad on Craigslist for a usability interview. You've filled out this
survey, and you get this email from Michael, and
it says, hey, this is going to be in a
hotel room, and you're like-- this is like
Wednesday to you. You're like, OK, I
guess I'll check it out. And then on Friday, like
actually, you're like, oh, god, kind of cursing yourself. You're there in the lobby,
and Michael shows up, and he's like, OK,
so come with me. We're going to go
up to my hotel room, and you're in the hotel room,
and there's all these cameras, and you're just like-- you're
probably a little weirded out. But I mean this just
goes to show that even under the most potentially
awkward of circumstances, you actually can get good
honest reactions from customers, because what Michael does is
he puts the customer at ease. So he's wearing his work
badge in the hotel, which doesn't totally make
sense, but it does show that he is who he says he is. He's got a clipboard. His body language is
even very careful, and he's asking questions
of the customer. He's saying, tell me
about your hotel routines when you check in. Where would you
set your suitcase? And if you found that you had
forgotten your toothbrush, what would you do? And they say, well, I guess
I'd call the front desk and ask where to get one. And he says, OK, I want
you to go ahead and call the front desk. Here's the number. So this first person,
she calls the front desk, and this is actually the phone
number of Allison from Savioke, who says, oh, sure, I'll
send a toothbrush right up. So meanwhile we're back
watching over video, and we can see on
all these, it's kind of like this
like FBI setup. He's got all these cameras. And we know that the robot
is coming into position, but from inside the room, she doesn't know. And the robot drives
into position. She opens the door, and
we can watch the reaction to the personality. We can watch her take
the toothbrush and see, does she talk to it? What happens? And so we get to watch this
five times throughout the course of Friday. We get to see five totally
cold reactions to this robot, and we see what happens. And it turns out that
five is enough to give you these big patterns of things
working and not working. So it turns out
that nobody wanted to play games with
the robot, which is good, because that would have
been a lot of engineering work. We were able to sort of
cut that right off the bat. But the face worked. Also, no one had any
inclination to have a conversation with the
robot, to ask the robot to do something verbally. That didn't happen. So the face was a big success. And the dance even worked. They even found the dance, which
when described in the abstract, didn't sound that
great, honestly, but it was quite
delightful in real life. This is a little video of what
the robot looks like now out in the wild, and
Savioke launched with this sort of
robot personality, and they now have more
orders than they can fulfill. And as you can see, it
has very simple eyes. The dance is really
just kind of rotating, but it all gives this
kind of delightful feeling to the robot. And it turned out
to be something that guests absolutely loved. So the idea with the
sprint is, of course, that we don't have to be so cautious about taking these big bets or water down our
ideas or play it too safe. It might sound a
little bit corny. You can almost be
true to your vision, because you know that
you're not betting the whole farm on everything. At worst, you'll
embarrass yourself in front of five customers,
which is nice to know. That's the idea behind the book. There are stories about Savioke
and many other companies in there, and it's also
kind of a DIY guide. And yeah, let's talk about it. You guys want to come on up? CAROLINE O'CONNOR: So
tell us about why does GV have a design team? Like what's the goal of having
a design team at a venture capital firm? JAKE KNAPP: We're trying to
slip by as long as possible. Well, the big idea with GV
is to invest in companies that we're excited about. We're excited about the
businesses they're building. We're excited about
the technology that they're building, and
we want them to succeed, but we want them
to succeed in a way where we make a return
on our investment. And so we actually see design
as kind of a strategic advantage for us. So if you think about a
company that's starting out, and they're starting to get
some traction on their business, they've got this really
big idea about where they can go with the technology. They've got to, in order
to get there, build this kind of bridge
between their ideas and the reality of the real
world, the reality of what their customers will want
to use, be able to use, fit into their lives, and
design, as we kind of saw in this story, offers this way
to really quickly prototype ideas, understand
how they'll actually work in customers' hands. And we find that it's kind
of our secret advantage over if those companies
were just sort of doing things on their own. CAROLINE O'CONNOR: Yeah. So given you guys have
hundreds of companies in the GV portfolio. JAKE KNAPP: Over 300. CAROLINE O'CONNOR: So I
imagine there's a ton of things that you could do to
try and help them out. How did you come to sprints
as kind of the primary way to help the portfolio? JOHN ZERATSKY: Well, we
kept investing in companies, and we were like, oh, we can
help all these companies. It's not quite like
that, although it's not that far from the truth. But I was at GV before Jake
joined us, and had a couple people on the
design team, and we had worked as
designers and writers at other startups in
other parts of Google. And so we kind of
had this approach where we would go from
company to company, and we were sort of the experts. Or they thought we were experts,
and we thought we were experts, and we kind of felt this
pressure and this anxiety of needing to have the answers,
to go into a company and say, this is what we
think you should do. But it became clear that
given the breadth of companies we were investing in and
the different challenges that those companies faced,
that there was no way we could have all the answers. Nobody could have
all the answers, and so we began looking for,
instead of the answers, a way to find the answers,
and that was really where the sprint
process became, we thought, really powerful. So we didn't have to say,
well, when I was at YouTube, this is what I did. And maybe it'll work for you. We could say, hey,
let's work together, and let's figure out the
answer to this big question that you're facing. CAROLINE O'CONNOR: Awesome. So you guys have had a
chance to apply this process with a lot of different
kinds of companies, small startups, big
startups, but also you've done a lot of work here at
Google, very big organization. What have you guys seen
in terms of pitfalls that are maybe common
to teams or things that can be blockers
for them generally, and how this solves it,
or what can block them from doing a sprint well? JAKE KNAPP: I think
one of the struggles is having discussions
in the abstract. John calls this getting
stuck in abstractopia, where you know when you're going
to have to build something that it's a big decision. And so we debate. We wave our hands
in conference rooms and try to anticipate how people
will react in the real world. And that's tough. That's an energy
drain, and it often doesn't yield the
best decisions. So part of the
reason for the sprint was to add a little
bit of structure to those conversations
and to use some of the things that actually
work really well in design. Design is a technique
and a set of skills that can enable people to
make things real really fast. There's a whole idea
of critique in design. And design is usually thought
of as this kind of kooky, creative art that people don't--
if they're not designers, they kind of walk quickly past
the design room and think like, it's like those guys are playing
D&D in there or something. But in the sprint, we've tried
to make those things very practical, very accessible
to everyone on the team, and then use them to
short circuit a lot of those pitfalls. CAROLINE O'CONNOR: Awesome. And how do sprints fit
with what everybody thinks of as like regular work? Like the calendar
that you showed that we're all too familiar
with here at Google-- is it something teams should
be doing all the time if they're working well, or
when do they bring this process to bear? JOHN ZERATSKY:
Yeah, a lot of times we think that
sprints make sense as sort of the kickoff or
kind of this initial burst for a new project
or a new initiative. So if a company wants to
reach a new kind of customer, or they want to introduce a
new feature, a new product, sprints are a really good
way to sort of kick that off. But they're not meant
to be sort of the way that you work all the time. At the same time,
there's a lot of ideas that are part of the sprint
that we think are really valuable to use sort
of in day-to-day work and day-to-day life. One of the examples is
sort of about the way that time and activities are
so structured in the sprint. For example, I do
all of my meetings on Thursdays and Fridays, so
a lot of the work that I do, I'm writing, I'm designing, I
need uninterrupted work time. So I actually leave
Monday through Wednesday open for those things, and
then Thursday and Friday are the days for meetings,
so being very structured in that Jake and I are very
sort of aggressive about-- JAKE KNAPP: It's sort of weird. JOHN ZERATSKY: We're sort
of weird about limiting distractions, so Jake
introduced me to this idea that he came up with of the
distraction-free iPhone. So uninstalling Twitter and
removing your email account from your phone, so you
can't check your email, and even-- it sounds
nuts-- but even like disabling Safari
in the restrictions and the settings for the iPhone
so that you don't have access to this sort of unlimited
pool of potentially very interesting but ultimately
distracting information that exists in the world. JAKE KNAPP: You
can see the picture that's being painted here. Fundamentally, John and I
have very poor self-control, and if left to our own devices
in a typical work week, we would be checking
Twitter and our email just continually over and over again. And the sprints and then some
of these other kinds of methods are ways for us
to put rails on it so that it's easy to do
the right thing, the best thing with our time. JOHN ZERATSKY: But
I think usually what happens with
the teams that we work with is that
they'll use sprints as sort of the kickoff
for some new thing that they're working on. And a lot of times they'll do
two or three sprints in a row, so the first sprint is five days
the way that Jake described, and the second sprint
is usually shorter. Usually the prototype
that you built and tested, there may be some
problems with it, some things you want to
fix, and then there's some other things
that went really well, that worked out really well,
and so the team will then do a shorter sprint where
they tweak that prototype and test it again, and
then maybe the next week, they do another
sprint, where instead of tweaking that
existing prototype, they create a different
kind of prototype, something that they can test
with live traffic or do a different sort of more
quantitative kind of test. CAROLINE O'CONNOR: Awesome. Can you talk a little
bit about user testing as a forcing function for teams? You guys have that
set on Friday, but talk a little
bit about how that can help with the distractions. JOHN ZERATSKY: Well, the
sprint is actually just an elaborate scheme to get more
companies to do user research. JAKE KNAPP: Well,
it's also-- I mean it does get back to that idea
of me being a procrastinator. And I did have this
realization that when I had a deadline, like
many, many procrastinators, I suddenly got really productive
right before the deadline. So if you decide you're
going to run a sprint, and then on Monday
you say, these are the customers
we're bringing in. You start recruiting them. You know that five strangers
are coming in on Friday, and by Thursday, you'll be like, I don't want to embarrass myself. So you will get
really productive. And there's all
these good reasons for bringing in customers. It makes you focus on those
customers in a very concrete way throughout the
week, so you're not just waving your hands,
but you know these are the people who are coming in. And it also gives
you data right away. It's the very
fastest way that you can get some data on a
complex idea like the ones that we put into a prototype. But that forcing function is a
very real, very powerful part of having people show up. JOHN ZERATSKY: Part
of our motivation for sort of building the sprint
around customer interviews specifically is
that for the startups that we work with, and I imagine for a lot of the teams that you guys are on here,
your center of gravity or your sweet spot is writing
code and shipping software. That's what we all
know how to do. And so there's a
tendency for that to be the thing
you do when you're trying to figure out something. And you see this in sort
of, like, the Lean Startup and different approaches like that, like, I'll create an MVP, create some basic
version of the product that you can get out
there and you can get data as quickly as
possible, but we think there's this amazing
shortcut, which is creating this realistic prototype. And then instead of
having to launch it and having to analyze the
data, just watching people react to it. We think it's not a
replacement for sort of quantitative
testing and launching something in the
wild, but it gets you a different kind of data. It gets you this really rich
qualitative understanding of which things are
working and which aren't. And the best thing is you can
do it in a few days instead of weeks or months. JAKE KNAPP: You can try a
sprint once, do that experiment with your team, try
working in a different way, and it all comes
with the sprint. And then you can
see how it works. CAROLINE O'CONNOR: So for
partners at a venture capital firm, you guys have pretty
unusual backgrounds. No top hats or monocles
that I've ever seen. JAKE KNAPP: We left
it at the door. CAROLINE O'CONNOR:
Can you tell us a little bit about how you
came to be partners at GV? JAKE KNAPP: Well, that's
a very long story, Carol, but I'll begin at the beginning. I won't begin all the
way at the beginning, but I studied art and
painting in college, which was, generally speaking,
not helpful for any of the work I did after that. But I've always been interested
in computers and making things, and when I was a kid, I would
make games on the computers and test them on my
unwilling friends not knowing that
that was basically what I'd end up doing sort of
forever, only not with games. But for me, the process
that led me to GV was, ironically,
not about having a ton of startup expertise. I've learned a
lot about startups and built that by being at
GV, but actually just having this sort of interest
in figuring out how teams can use
design and how teams can make better use of their time. So it was the
opportunity at Google to really test that out and
experiment over and over again and make the process
better that gave me the experience
that I needed to be able to start doing that here. But you have another very
different road to GV. JOHN ZERATSKY: Yeah. I got my start in journalism. I was actually
introduced to design by working at a newspaper,
and so I sort of accidentally found my way into this
really interesting kind of design work. So every day I would come in. It was a daily newspaper. I would come in, and
the editors would say, OK, here are the stories that
we have for today's paper, for tomorrow's paper, I guess. Here are the photos
that we have, and here's sort of the priority. These are the most
important stories. These are the least important. And so I'd kind
of do this puzzle of putting the paper
together and figuring out how everything fit. And then it was a paper
that was distributed on a college campus, the
University of Wisconsin. So the paper would
be printed, and then I would go to
class the next day, and I would actually watch
people read the paper. So I was like doing usability
testing without really knowing it, and in a lot of ways, that
experience kind of crystallized my approach to work-- trying
to get in a lot of reps, trying to create environments
or seek out environments where I could get in a lot of
these little loops of trying something and then seeing
how it worked with customers, and then fixing it and making
it better the next time around. So after that, I worked
at a startup called FeedBurner as a designer. We were acquired by Google. Worked at Google in
Chicago, worked at YouTube doing sort of product design,
UX design work all along, and I was attracted
to GV for a lot of the same reasons as Jake. I wanted this
opportunity to work with so many companies working
on so many different things. I saw it as being sort of like
being at the newspaper again. I knew that I was going
to have these loops. I was going to have
this opportunity to try things, make them
better, and try them again. CAROLINE O'CONNOR: Awesome. So the question is,
how do you make sure that the small sample of
users that you're getting relates to the larger
sample that you're going to be working with? JAKE KNAPP: Well, one
of the great things that happens in the course of setting
up the customer interviews is early in the week,
you talk about all of the different customer
types that you have. And you figure out, OK, we're
going to focus on this group, and what happens when you make
that decision is you start to get very real about
what defines that group, because if we want
people to come in who look like that, who
use that kind of software, have that kind of job, whatever
it is, then you realize, well, in order to get
them, we're going to need to post an ad here or
contact our network of folks here, and we're going to
need to screen people out. We're going to need to
recruit a lot of people and put together a survey,
where we ask them questions, and we want the results of
those questions to tell us, is this our customer or not? And it's that survey,
that sort of screener that we use that helps
us make sure we're getting the right people. But that exercise,
having to do that is something that we find many
teams at all kinds of companies small and large defer. They talk in sort of broad brush
strokes about their customers, but the sprint makes
you get very specific, because you want to make sure
you get the right people in.
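(A minimal sketch of that screener idea, assuming made-up field names and thresholds that are not from the talk -- just an illustration of filtering survey respondents down to the customer profile the team picked on Monday:)

# Hypothetical screener filter; the criteria below are invented for illustration.
def qualifies(response):
    # Keep only people who match the assumed profile, e.g. business travelers
    # who stay in hotels regularly.
    return (response.get("travels_for_work") is True
            and response.get("hotel_stays_per_year", 0) >= 5)

responses = [
    {"name": "A", "travels_for_work": True, "hotel_stays_per_year": 8},
    {"name": "B", "travels_for_work": False, "hotel_stays_per_year": 1},
    {"name": "C", "travels_for_work": True, "hotel_stays_per_year": 12},
]

recruits = [r for r in responses if qualifies(r)][:5]  # only five interviews are needed
print([r["name"] for r in recruits])  # -> ['A', 'C']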
JOHN ZERATSKY: Yeah, and that exercise on Monday of creating the
map is also really helpful, because you avoid the temptation
to think in terms of personas and what kind of person it
is, and you think more about what situation is the person
in, and what are they doing? What's the task they're
trying to complete? And that's sort of that--
Jake showed that big map and then zooming in on that
one point, so in that case, it's a traveler who is
checking into a hotel and realizes that they
forgot their toothbrush. So you can look for
people who would likely be in that very
specific situation. JAKE KNAPP: To give you
a concrete example, when I worked with Slack, which is a
sort of messaging software, they were interested
in finding better ways to explain how Slack
works to companies who aren't tech companies. Slack's grown tremendously
with tech companies, and they're figuring
out how to explain it to other kinds of teams. And so they knew that
they wanted to expand to other kinds of teams. In the sprint, they had to get
really specific about which kinds of teams, which
are the best example of representative teams? Is it a team who's in media? Is it a team who's
in health care? What should we look for? What questions should
we put in the survey to find those people? CAROLINE O'CONNOR:
So the question is we have so much data. Do you want to be
looking at the data that you've got and digging into
that before you do a sprint, or do you do the sprint earlier? How do you think about that? JAKE KNAPP: Yeah. So there's maybe three
things I'll remind myself, there's three things
I want to mention. The first one
actually is that you should be really careful
of three-day sprints or anything short of five,
because usually the first thing people cut when they make
the sprint shorter is that realistic
prototype and the test. So I do know there are some
folks who will compress all the steps into three
days, and then you have to look out
for, like, dehydration or, like, passing out,
because it's pretty intense. But if you actually are
doing it at the normal pace, you just want to
be really sure you build that realistic
prototype and test it. Otherwise, you might not know
if you have the right idea. JOHN ZERATSKY: Yeah,
you can cut out days, but you can't cut out steps. Or you shouldn't cut out steps. JAKE KNAPP: Yeah. That's a nice concise
way of putting it. But in terms of when
you have a lot of data, knowing what to do
with it, I think John and I both came from
working at Google where there was tons of data. John worked on YouTube. I worked on Gmail,
and obviously, those are products where
you launch something, and you get a lot of data
about what's going on. But even when you
have a ton of data, it can be very hard to know
why something is working or not working. And so we certainly
have the experience of working for a long
time on a new feature, launching it, and seeing that
people were using it or not using it, but not
knowing why still. And that's part of
the thing that you'll get with the kind of
qualitative research that you do in a sprint
when you're doing interviews is that you'll know why. You'll be able to see
people actually doing it, and you don't have to just
guess what the numbers mean. In terms of when
to do the sprint, I think what we really
look for is a big question. And Savioke is honestly unusual among all of our sprints-- it's not that common for us
to work with a startup when they're so close to launch. It's not crazy or
unheard of, but it's more common to do
it at the beginning when you have that big question. You're starting off on
a totally new product or a big new feature,
or maybe it's the start of a new
quarter, and you know that you're going to be
putting a lot of effort in. You just want to check
where things are. But it's that feeling of, gosh,
we're making big decisions, but we don't know for sure. We're arguing or we're
scratching our heads. You can satisfy
that with a sprint. JOHN ZERATSKY: Yeah, I would
say that to your question about sort of bringing
data into the sprint and how to incorporate
that, having a lot of data to work with when the sprint
starts is actually a luxury. It's actually a great thing. And the sprints that I
can think back to that were the most successful
started with a lot of data. Either we did a round
of customer interviews, or we had a lot of
quantitative data about how people were
using the product or how effective
the marketing was. It does take some
work to distill that and analyze it and present
it in a way that makes sense, but that's always a challenge. So what we'll typically do is
we'll invite sort of someone from a data team or a product
manager or, in our case, a lot of times, it might
even be the CEO or something, because we're working with
these really small companies. And they'll give us on that
first day of the sprint a half an hour or an
hour sort of run-through of what they're seeing. Here is maybe the conversion
funnel, or here are the usage patterns in the product, and
this is what we know so far, and this is what we're
trying to figure out. CAROLINE O'CONNOR: So
is five days really the number for a sprint? Could you do it in one day? And how do you think about sort
of cutting it down or playing with time? JOHN ZERATSKY: Well,
we've experimented a lot, so that's kind of
the first thing is we're pretty confident that
it's the right number of days. But I like your question about--
you asked, what is the essence? Like if you were going to sort
of slim down the sprint, what would be like the absolute
essentials to not get rid of? And I want to hear
Jake's answer. JAKE KNAPP: Well,
you definitely don't want to get rid of that
realistic prototype and the test,
because what happens at the end of the sprint
is you run that test, and then you're seeing
how customers react, and then that's surprising. Ideas that seemed so brilliant on Wednesday turn out to be duds, and
then something that maybe you just took a risk on, and one
thing we didn't talk about so much is that we'll sometimes
make two or three competing prototypes and see how
they do head to head. And maybe it's that riskier
idea that turns out to work, but you don't know that until
you've shown it to customers. And the other
things you get when you have a realistic
prototype is for the team, it's this concrete--
it's like you fast forward it into the future. What if we were done
already with the product, and it looked like this? And that's really helpful. It's helpful for the team to
decide if the product seems to fail with customers,
do we have belief that this artifact is
something we can make better? And we're just going to
try to make it better in another sprint,
or are we off track? This wasn't as great
as we imagined. JOHN ZERATSKY: Yeah, I think
a lot of the short sprints that we've seen or
that we've heard about, for whatever reason, they
tend to be oriented or sort of weighted toward the
early parts of the sprint. So they tend to be more
about coming up with ideas, and I actually think
that coming up with ideas is usually not the challenge. People are constantly
thinking of ideas, and in fact, we found that
the ideas that people come up with in the sprint are often
not as good as the ideas that people have
had kind of rattling around their brains for
the last months or years. And I think it's because
the ideas in the sprint are sort of abstract,
they're unrefined, and they're new, whereas
the existing ideas have been through the
wringer, so to speak, of kind of thinking about
them, working on them, considering them. JAKE KNAPP: Just to
be clear, we make sure people put those old
ideas into [INAUDIBLE]. JOHN ZERATSKY: Exactly. Yeah. JAKE KNAPP: They're
on the table. But I think then you can
take-- the essence of it also is to-- we're sort of
geeky about human interactions. John and I are a
little-- I don't know. We're sort of pod people. But we think that if you
can put some structure around the things that we often
do at work with no structure, that it helps a
lot, and so there are a bunch of little
pieces in a sprint that can be helpful at any time. It can be really
helpful, when you're in a meeting and the discussion has evolved into a brainstorm,
to say, wait, hold on a second. Let's all quietly
write down our ideas. All of a sudden, you give
the introvert or the person who's not so good at
pitching their ideas, you give them a chance to
have their idea be on a level playing field with
everyone else's. Sometimes it's helpful to
just put the ideas on the wall and vote, the kind of thing that
we do in a sprint all the time. It helps you shortcut
sometimes a discussion that isn't necessary. And the idea of
talking to customers-- putting what you have
in front of customers is something you can do. Even if you're not
running a sprint, you can test your product
and put it in front of people and start to answer
those why questions. CAROLINE O'CONNOR:
So the question is, how do you go from 17
ideas on paper to the few that you're going to test, and
how teams navigate that well? JOHN ZERATSKY: That's
probably the most like robotic, scripted
part of the whole sprint. And for good reason, because
it is very challenging. I guess we sort of talked
through the specifics, but I'll lay out the
high level, which is that we have a very
particular set of voting exercises that we do starting
with what we call a heat map, which is where everybody
just sort of looks at the ideas and has these small
colored sticky dots. They get effectively an
unlimited number of them, so if they spend them all,
they can get more, and just put a dot on anything
that seems interesting. Then we do what we
call a straw poll sort of round of voting,
where people get larger dots, and they get a limited number. But it's non-binding,
so you're sort of going around the room
looking at the ideas and voting on the ones
that seem most promising. And then we do a
really fast critique, so the group together talks
about which ideas they thought were the best and which ones
maybe are problematic or not as interesting. And then one of the more
sort of unusual steps is that we then rely on the
decision-maker in the sprint to actually make the final call. So the leader, the
executive, the CEO, whoever it is-- that person
gets to sort of absorb all the work and all the ideas
and all the opinions that have been shared in
the room, and make the final call about
which prototype or which prototypes
to build and test.
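(A minimal sketch of that voting sequence as John describes it -- an unlimited heat-map pass, a limited non-binding straw poll, and a decider who makes the final call; the sketch names are invented for illustration:)

from collections import Counter

def heat_map(dot_sheets):
    # Everyone places as many small dots as they like on interesting sketches.
    return Counter(sketch for sheet in dot_sheets for sketch in sheet)

def straw_poll(ballots, votes_per_person=1):
    # Everyone gets a limited number of larger votes; the tally is non-binding.
    tally = Counter()
    for ballot in ballots:
        tally.update(ballot[:votes_per_person])
    return tally

def final_call(poll_tally, decider_choice):
    # The decider sees the poll but makes the final, opinionated decision.
    return decider_choice

heat = heat_map([["face", "dance"], ["face", "games"], ["dance", "face"]])
poll = straw_poll([["face"], ["face"], ["dance"]])
print(heat, poll, final_call(poll, decider_choice="face"))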
JAKE KNAPP: One of the ways that we think about this is if you
think about your phone. Like every morning, if you
charged your phone overnight, you wake up, the battery's full,
and then throughout the day, you do stuff, and the
battery wears down, unless you have a
cooler phone than I do. But that's what happens to me. Our ability to make decisions
is kind of like that, so we wake up in the morning. We've got like a full battery
of decision-making ability, and then as the
day goes on, if you have a lot of intense
conversations, if there are 17 ideas,
and you're trying to narrow them down to three
by like arguing the whole day or like pitching one
versus the other, like your battery is just
going to go [SHRINKING NOISE]. And it's not going to work. And so in the structure that
John described, what we're trying to do is make
those decisions as effective as possible
so that every time you burn a little battery,
you're making progress. And to do that, we
cut out sales pitches. We make a lot of the
evaluation silent. We make the sketches
anonymous, so you're not evaluating who made it. And then ultimately,
when it starts to be the hardest
part of the decision, we turn it over to that
one person who we know will make an opinionated call. So we don't have to
worry about groupthink. We don't have to worry about
watering down and getting to a consensus. We're just saying like,
OK, look, decider, you make the call, and
then we'll find out. Right away we'll get data,
so you can take a chance. CAROLINE O'CONNOR:
So the question is, how do you modify
this for situations where humans are not
that end consumer? JOHN ZERATSKY: Well, I think
that is a really interesting question, and I don't think
we've done a sprint like that. JAKE KNAPP: I'll
give you an answer that-- I'll start
with a bad answer to that question, which
is that for most things-- and you may be talking
about an exception to this-- for most things,
there is some point at which the product does touch a human. And it might be downstream. It might be that you're building
a back end service that, in turn, supplies something
to another service, and in turn, the place
where it touches the human is somewhere downstream. But fundamentally,
humans are the problem. Whenever you make something, and it doesn't work well,
or there's a problem, or people don't like it,
it's that surface area where the human touches it
that's usually the unknown. And so we often
think, well, where is that human touch point? What is it? It might not be like
a software interface. It might be a
sales conversation. You might be building
a back end that's going to supply functionality
with the expectation that we'll be able to
sell this to a third party or that it'll enable us to
do some new kind of query that people will want,
searches that they'll want to run or something. I'm just making this
up, but that idea often helps us get to the root
of the question, which is about what will people do? People are the ones who
are hard to predict. Now if that doesn't
work, the answer that's slightly better, but
I'm waving my hands a lot more, is that you try to prototype
what you would be supplying if all of the coding was done. And you say, during our day when
we're prototyping on Thursday, we're going to be
faking that part. And then we're going to be
somehow manually supplying that to whatever's on the other
side and seeing what comes out. I'm not a back end
developer, so you're probably just laughing at me, but I
think that might be the idea. OK. JOHN ZERATSKY: I
was going to suggest that she change the test to
try to come up with-- the test that we do, our
customer interviews, but there might be
some other kind of test that you could think up. I'm also waving my
hands, but there might be a different way
of sort of validating whether an idea or an
approach is any good or not. JAKE KNAPP: A big
part of the magic is that constraint
of five days, and we know that the
steps in the sprint will provide us rails
to evaluate ideas, make decisions, and then
very quickly, try something, and it might not be
right, but for me, there's so much frustration
when you spend weeks or months doing
that sort of discussion and trying to decide. And so yeah, it might be just
a different kind of test, and you commit to
doing the sprint to have something
at the end, so you can see what it might be like. CAROLINE O'CONNOR:
I have one more I wanted to throw at you guys. You talk a lot about the value
of having the whole team watch the user testing and really
see it for themselves and be connected to it. I found that to be really
challenging at Google given the calendar situation. And so then you end
up with challenges to a data set of
[INAUDIBLE] of five, that that can't
possibly be valid, and especially for folks who
haven't sort of seen the user test. Do you guys run into that? And if you do, how do
you deal with that? JAKE KNAPP: Well,
that idea of having everyone watch is
really important, and one of the reasons
why it's important is that if everybody
can't watch together, then it often falls on
one person's shoulders to conduct the interviews
and then communicate what he or she saw in those
interviews to everyone else. JOHN ZERATSKY: And that person
has to be really convincing. JAKE KNAPP: Yeah. Yeah, incredible, and
it's a lot of work, and there's a time delay to
put that information together. So if you get
everybody in the room together on a
Friday watching it, this magical thing happens where
there is no argument about what people saw, and there's
also-- we've never had a sprint where after
watching five interviews, people, no matter how much
hard core data nerds they were, didn't say like, oh, yeah. It's clear what the
big findings were, what the big patterns were,
because by that fifth person, you're just starting
to see the same things, things that you saw
earlier happen again. There's nothing new. And I don't know if you
want to talk about why five. JOHN ZERATSKY: Well, I was just
going to add something and talk about one of the
mechanisms that we use to sort of make sure
that people stick around for the research and actually
watch the research is by making sure that the
people in the sprint all participate in the
creation and the selection of those ideas so that
everybody is sketching, everybody's involved in the
decision-making process, everybody's involved
in the prototyping, and the reason
that's important is that then those people
really want to see what's going to happen in the test. It's like people
are then into it. They're in suspense. They want to find out if their
ideas are going to work out or not. So by making sure that
the entire team is going to participate, even if that
means a smaller team, a smaller core team who's going
to be really focused, it creates an incentive
for those people to stick around for the end. JAKE KNAPP: There's a
study done by Jakob Nielsen in the '90s where he
evaluated hundreds of these kinds of interviews
to see at what point the sort of learnings
trailed off, and it turns out that
whether you interview five people or 30, by the
time you've talked to five, you'll see 85% of
what you'll ever see. And so you're better off--
that's the point of diminishing returns-- you're better off
then turning your efforts to doing a new prototype and
fixing it or changing it. And so we've also
seen that anecdotally in our own experience that,
again, by that fifth interview, it starts to be
repeated information.
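(For reference, the curve from that study is usually written as the Nielsen-Landauer formula; a minimal worked version, assuming the commonly cited per-user discovery rate of about 31%:

P(n) = 1 - (1 - L)^n, with L ≈ 0.31
P(5) = 1 - (0.69)^5 ≈ 0.84, i.e. roughly 85% of the findings you would ever see.)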
JOHN ZERATSKY: We both heard a great metaphor the other day for this. Imagine if there was
a piece of carpeting here that was kind
of flipped up, and people were coming up to
ask us questions or something, and 20 people were going to
come up and ask a question. And the first two people tripped
on the piece of carpeting. Would we need to watch
the other 18 people all trip on the carpeting? We wouldn't need to. So there are some
things like that that just become so obvious
that they need to be fixed, or things that are
working well after even a couple of customers
that five really is a magic number for
customer research. JAKE KNAPP: In the sense
of being a complement to large scale data. Either later on or before,
you have that large scale data that can tell you something
different about it, but five works well. And because it's so important
to have the team there, to have everyone
see that research, we really think that
you're better off, if you don't have time to
do that realistic prototype and that fifth-day test,
you're better off not doing a sprint at all, because it
might be a sign that what you're working on is not
important enough to the team. And so in that
case, you might want to wait until it is
the important thing, and you're willing
to do it right, because you'll get so much
better results by going all the way through. CAROLINE O'CONNOR: How important
is it to test three ideas and evaluate them against each
other as opposed to maybe just testing one idea? JAKE KNAPP: We don't
always test three, but we often do, and the
reason why it's valuable is because-- actually,
there's a couple reasons. One is that if you know
you have two or three, you can take bigger risks. So you're less likely to try
to make this really tough call between two promising
directions or to try to sort of water them down
or merge them into one idea. The other thing is
that frankly, we've done this enough times that
we know how often we're wrong. We know how often founders,
anybody who's a decision-maker is wrong. It just turns out that
humans are unpredictable, and when you show
them this new thing, it's really hard to tell how
they're going to respond. And so you have just
increased your hit rate. If you've got two or
three different things, it turns out that the
chances that one of those is the right one is
much, much higher. JOHN ZERATSKY: In
the Savioke story, there were three
different ideas, but they were all packaged
into one prototype. So a lot of times,
that's what we'll do. We thought it would
be weird if there were like multiple robots making
deliveries in your hotel room. But if it's something-- JAKE KNAPP: Here's
your toothbrush. JOHN ZERATSKY: If it's a more
conventional type of product, if we're talking about like
when we worked with Blue Bottle Coffee, so we were
helping them figure out how to expand their
business online, how to bring the
experience that they created for their customers
in their stores to the web. We had a lot of different
ideas for how we might do that, and what we did in that case was
we took those different ideas, and we created three
separate prototypes, and what it looked
like to the customers we tested with were three
different websites where you could buy coffee. JAKE KNAPP: Three totally
different coffee companies. JOHN ZERATSKY: They even
had different brands. We made up fake logos. They had different
color schemes. And so that's a
really powerful way to test when you have ideas
that are completely in conflict. With Savioke, the face and
the games and the dance were not in conflict, so
we could put them together. But if you have ideas
that are in conflict, these multiple competing
prototypes with fake brands, they create that
illusion for the customer that they're looking
at real stuff, and they're just sort of
reacting and thinking out loud and telling you, I
really like this one, or, this one seems like I don't know if I can trust it, or whatever. You're sort of
watching them shop between these different options. CAROLINE O'CONNOR: Thank
you, guys, so much. This was really,
really interesting. JOHN ZERATSKY: Thanks,
everyone, for coming. Thank you. [APPLAUSE]