JEFF GOTHELF: Good morning. Thank you guys for being here. What I want to talk about--
we have an hour together-- actually, a little less. We're starting a
few minutes late. So I want to get right to it. What we're going
to talk about today is how to build successful
teams inside digital product organizations like
Google and others, and how to do so focusing
really on a variety of different things, including
evidence-based decision making, customer centricity,
and really great design. And to start off, I don't know
how much you know about me, but I want to share a
quick personal story to start off the conversation. 19 years ago, I graduated
from undergraduate school, and my first job
was with the circus. I ran away. I graduated on Saturday. Monday morning I
joined the circus, and spent the next six months on
the road with this circus right here-- the Clyde Beatty
Cole Brothers Circus, which at the time was the largest
three-ring tented circus. I spent six months on
the road with them, traveling up and down. It was an I-95 Circus
is what they called it. And I saw the circus. We worked every day-- no days
off-- two shows every day, three shows on Saturday. So over the course
of six months, I saw the circus 400
times in a row-- note for note, word for word. And needless to say,
my children will never be going to the circus,
at least not with me. The interesting part, too, is
I was driving home actually the other day. I was in the city. I live in New Jersey. And there it was. I hadn't seen it in
probably in 19 years. And it was set up on Route 80
just outside of New Jersey. And it just looks a
little different, but it smelled the same. I went by there to see if
anybody I recognized was there. And over the course of the time
that I spent in the circus, I met a lot of
interesting people. One of the interesting people
that I met during that time was Steven Tyler. That's Steven Tyler right there. And that's actually me
next to him 19 years ago. Time's been a little rough on
my hairline in that time frame. That was really interesting. He would come to the circus,
and we all got to meet him. He's done well. He's preserved himself rather
well over the last 20 years. He's got that secret formula. Another really
interesting person that I got to meet while
I was there was this guy. That's the Human Cannonball. And in case you
can't see him, that's him at the top being
loaded into the cannon. And in the circus, there
was a lot of downtime. We worked two shows every
day, but once the last show was finished, you
didn't have anything to do until the next day
when the next show started. And so there's a
lot of downtime, really no internet in 1995,
certainly not mobile internet. And so you get to talking. And one of the questions
that I found fascinating and I wanted to ask
was, how does one become the Human Cannonball? What's the career
path to ending up in this particular
position at some point? So I asked him, and
he told me the story. And the story that he
told me is fascinating. And as these stories
tend to do, they start with the previous
Human Cannonball. That's how these
stories have to start. And so the previous
Human Cannonball had this job before this guy. And the way that
this trick works is it's not a real cannon
in the sense of there's no fire or anything. There's a spring-loaded
mechanism in there. The guy slides down. The Ringmaster hits a button. There's a puff of smoke,
and then he shoots out. It pushes him out of
the cannon, and he lands on the other side of the
tent in a net of some kind. Now the way that they
knew where to set up the net with the previous
Human Cannonball was they had a dummy-- a mannequin
that weighed the same as the previous
Human Cannonball. And what they would
do is, they would drive this red
truck into the tent. They would aim the cannon. They would load
the mannequin in. They would fire it, and they'd
kind of see where it landed. And they would put
the net up there. And we did that
every other night, because we changed
venues every two days. One night, we were running
late-- or the previous Human Cannonball-- I wasn't there that
year-- they were running late. And they got to the
next place late, and they didn't have time to
test where to put the net up. And so they got the truck in
the tent, and they aimed it, but they left the
dummy outside overnight and it rained that night. And the next day,
assuming everything was exactly the same, the
previous Human Cannonball and his team loaded the
dummy into the cannon. They fired it. They saw where it landed. They put the net up. And that afternoon, in
front of 4,000 children, the previous Human
Cannonball waved goodbye to all those kids,
loaded himself into the cannon. They fired him off, and
he sails way past the net. All right, tragic story. He doesn't die. He ends up severely
injured, as you could imagine,
from such a flight. And he goes back to
Florida where all circuses come from-- in case
you didn't know that. That's where they're based. But that's also
where they come from. And he sees his pool guy. His pool guy is this guy. And he says, hey, would you
like a job upgrade from pool guy to Human Cannonball? You only have to work
four minutes a day, and here are the risks, though. And the guy took the
gig, and so that's how this guy ended up
as the Human Cannonball. Why am I telling you this
story, besides it just kind of being a fun story to
tell and a fascinating look into this weird subculture that
I was a part of very briefly? The reason is this-- the
Human Cannonball's team-- the previous Human
Cannonball's team-- made the same assumptions every day. Everything's exactly
the same, so we should work exactly
the same way, and assuming that the
results that we get will be exactly the same. But as it turns out, the
assumptions that they did make were wrong on that one day,
with very tragic consequences. And the same thing is true
in the world of software these days. The way that we've
built software in the past, and the way that
we're building software today, and where it's
headed are changing. And the assumptions on which
we're basing the way that
we build digital products don't hold in today's realities in the
way that we're moving forward. And the reason
for that-- and you guys should know this
really well here, obviously, is that software's eating the
world-- every business that's of scale in the 21st century,
every business that seeks to scale in the 21st
century, at its core is a software business. It's a digital business. And this became really
evident recently with the leaked New York
Times Innovation Report. I don't know if you guys
had a chance to read this, but this is a
fascinating document. It's an internal audit
that "The New York Times" ran for six months to see
why BuzzFeed and The Verge and Vox and Vice Media
were eating their lunch. And at the heart of
this-- at the root of all this 90-page
document-- was the fact that "The New York
Times" saw itself as a journalistic
organization that happened to deliver content
through a digital channel, amongst other channels. Whereas the recommendation
was to flip that on its head, and they needed to think of
themselves as a digital company that happened to do
really great journalism. And that's really
where the disconnect is for this company, and
a lot of the companies that I work for. And we're seeing this
across the board. We're seeing it at
FedEx, for example. What business is FedEx in? You could say it's logistics,
but in reality, it's the software of logistics. It's the running of logistics. And I think in years
past, we've seen this. As software comes into play
into existing industries, the typical reaction
from those industries is to push back, right? It's to say, this is not the
way that we'd like to work. Software is not
going to disrupt us. And there's no way
that we're going to sell our music in
this particular way, or use this
particular technology. In case you're curious, that's
Lars Ulrich from Metallica, who was really
loud, and very, very present in the fight
against Napster when Napster first came out. There was no way
that he was going to allow Metallica's
music to be sold that way. Surprisingly enough, Metallica
is now available on Spotify. So software inevitably
eats the world. We're seeing this again in
the world of cars, with Tesla. Tesla's trying to sell
directly to consumers. And what is the automotive
dealership lobby trying to do? Litigate them out of existence. They're suing them
over and over again, so that they can't
bypass the dealership and change the way that
we sell, and upgrade, and service our cars. Netflix is another
great example, right? No one ever thought
when Netflix came around that we'd re-think the
way that we consume media, much less produce it. And yet today, try to
find a Blockbuster that's still in business, right? Now what's interesting
is that when you're making products
in the physical world, and you've got
this assembly-line model of production,
it works well when you know exactly what the
end state is going to be, how it's going to be used, and
what it's going to look like. And that's great if you're
making cars, or TVs, or even cell phones. But what we've done is
we've taken this model, and we've applied
it in many companies directly to software,
where there's become this kind of
assembly-line model of software development. Someone has to define it, and
then someone has to design it, and then someone has to develop
it, and test it, and deploy it. And what's interesting
is that in software, we don't know what the end
state is going to be. In fact, I would
argue that there is no end state to software. And I would also
argue that we really don't know exactly how
our customers are going to use the things that
we put in front of them. And so using this model
starts to break down. And it starts to break down for
this reason-- I think again, you guys know this better
than most companies-- that software has
become continuous. There is no end state. We just simply keep improving
it, optimizing it, iterating it, moving it forward. There is no finished product. There is no big release. There's just
continuous deployment and continuous release
to our customers. And then ultimately, from that
comes continuous learning. What I think is
fascinating is this-- and I'm curious how
this compares here-- this is how often Amazon
pushes code to production. Some customer
somewhere sees new bits every 11.6 seconds at Amazon. That's staggering. That's amazing, right? Five times a minute, they
are taking tiny risks. They're trying new things,
and then they're listening. They're sitting back,
and they're listening. They're sensing the
response from the market. Did they like it? Did it change their behavior? Did it get them to do
something differently? If it did, terrific. Let's scale that out. Let's optimize it. Let's design it better. Let's make it easier to use. And if it didn't, let's
learn why and roll it back. Now the impact, of course,
is that the learning is huge. If you get it wrong and you roll
it back, the failure is tiny. And so taking these
tiny risks allows them to develop this sensing
layer with their customers. They're continually
having a conversation with their customers,
so that they're able to sense very
quickly, and then in theory, they can
respond equally as quickly. This is working well. Let's scale it out. This is not working well. Let's find out why, and
then let's try it again a different way. Now the interesting thing is
that insight has two sides. We can't simply just
measure, and assume that we know why something is
working or not working, right? That's the quantitative
side of things. Insight also has a
qualitative side. We have to understand why people
are actually behaving this way. What they are doing is
the quantitative side, and why they're doing it
is the qualitative side. And together, we can
build a 360-degree view that allows us to react
very, very quickly. By comparison, this is where I
used to work a dozen years ago. Our feedback cycle--
just to give you a sense-- Amazon gets feedback,
let's say five times a minute, potentially. Our feedback loop at AOL
12 years ago was 12 months. We would work for six
months to get every bit of code and design
perfect, because we had to print 20
million of these, and then send them out
to people in the mail. And that's how people
consumed software. So six months of work to get
it done, ship it in the mail, wait for people to install
it, see how they use it, six months to collect data,
and then to ship again. Right? It's a 12-month feedback
loop, whereas today, five times a minute you
can get that feedback. Back then, it made
sense to try to get it as perfect as possible,
because again, you're making 20 million copies
of this particular disc. Today we can update that
much more efficiently. And because of that, these
industrial-era management tactics don't work with
continuous software. We can't manage an assembly
line and optimize a production process, because
we don't know what that end state needs to be. We have to continually
figure it out, sense it, and respond to it. And so as makers of
digital products, as makers of software working
in this continuous world, there's a few questions that we
need to answer for ourselves. The first is this. What does this continuous
world mean for the way that we build products
and organizations? It's not enough to just change
the way that we build products. We have to build the
companies and the teams around those products that
can work in this reality, and that can do their best
within this new world. Right? The second question is this. How can we take advantage
of all this information? We've got all this
information coming in, quantitative data after
quantitative data, assuming you're doing the
research underneath it as well. You've got qualitative
and quantitative. How do we actually build a
team and an organization that can take advantage
of that information? And then lastly, how can
we maximize the creativity, the learning, and
then ultimately, the productivity of
our teams in an effort to do their best possible
work, to build the best possible products
for our customers? And so in order
to do so, in order to survive in this
continuous world, we have to build a
culture of learning. That's the key here. We have to change the mindset. It's not enough to
just deliver software. We have to actually deliver
learning on a continual basis. That's what the
Amazon teams get to do with a release every 11.6 seconds. They're building a
culture of learning. My friend Dave Gray
wrote a book about a year or two ago called "The
Connected Company," and it's a terrific book, and
I highly recommend you read it. Now in that book, Dave talks
about two different types of corporate strategy. The first is
deliberate strategy. Deliberate strategy comes
from the industrial-era mode of thinking, where it's
the executive suite that has all the ideas. That's where the creativity is. We'll come up with everything
that you need to do, and then we'll give
it to you guys, and you can go
forth and execute. And that's the reality. And what happens in that model
is really interesting. I'll show it to you in this
really short video that I've got in here, about
what actually happens when you take deliberate
strategy to an extreme. And that's Johnny Depp as Willy
Wonka illustrating my point for me. [VIDEO PLAYBACK] [WHIRRING] [DING] -Do you mean that's it? -Do you even know what it is? -It's gum. -Yeah. It's a stick of the most
amazing and sensational gum in the whole universe. Know why? Know why? Because this gum is a
full, three-course dinner all by itself. -Why would anyone want that? -It will be the end of all
kitchens and all cooking. Just a little strip of Wonka's
Magic Chewing Gum, and that is all you will ever need at
breakfast, lunch, and dinner. This piece of gum happens to
be tomato soup, roast beef, and blueberry pie. [END PLAYBACK] JEFF GOTHELF: In most
cases, deliberate strategy yields a situation where you end
up with a customer asking you, why would anyone ever want that? Which is the worst
possible thing you could hear when
you actually put your product in
customers' hands. The second worst
thing you could do is exactly what he
does there, which is fall back on
your marketing notes to justify all the
assumptions that went into making that product. It will be the end of all
cooking and all kitchens, as if people don't
want to cook, and don't want to spend time
in their kitchens. And again, at the
beginning when you set out to solve a business
problem or a customer need, we have no idea what that end
state is going to look like. I think this is a
perfectly good example. You could walk past these
two hippies 40 years ago, and never think, never know
what the end state was actually going to be for these two
particular individuals. Right? And again, that's the same thing
with the products that we make. And so what's the alternative? The alternative, of course,
is emergent strategy. It's different and it's organic. It lets companies learn
and continually develop new strategies based on this
ongoing culture of hypothesis and experimentation. It takes a bottom-up
approach that says, the people who are closest to
the customer, the people who are making the
product, probably have the best ideas about
how to solve this. We need to build a team that
allows them to try this out. And we need to build an
ecosystem around them that allows them to run these
experiments while not crashing the system as a whole, so that
we can keep doing business. That's where that emergent
strategy comes from. And again, we hear this from
a variety of executives-- Amazon-- you need to set up
and organize so that you can do as many experiments per
unit of time as possible. Eric Schmidt calls them at-bats. He's talked about it
at Google, about how we've got to take as many
swings as possible at something to make sure that
we get it right. And so in order to
build a culture that supports this way of
thinking, of learning, of experimentation,
we have to think about the structure for it. And the structure
for it is the team. The atomic object of a culture
of learning is the team. Right? That's the smallest
unit we want to go down to. We don't have a
particular rock star that can answer all
of our questions. We want to build these teams. And the question becomes, then,
how do we build these teams? What makes up an
innovative team, and a team that can take advantage of
this culture of learning? And we'll talk about
three different things. The first is the
anatomy of the team. What's the makeup of the team? The second is, how
do we task the team? How do we tell the
team what to do? And then lastly, how
should the team work? Should they work in
an agile process, in a waterfall process,
a lean process, whatever you want to call it? OK? Let's start with the
anatomy of the team. What is the makeup
of the team itself? And to do so, I'm going to start
with a series of anti-patterns, how not to build
innovative teams. The first is this. Putting people in silos breaks
up the communication cycle. It isolates team
members, and forces them to communicate with
written documentation. Everybody feels
like a line worker. I'm going to do my thing, and
hand it off to the next person. And because of that, no one has
ownership of the whole thing. Everybody feels like
a service provider. Right? It just passes
through my department. I add my coat of paint on my
assembly line, and it moves on. No one has a sense of the whole. What are we building? Who are we building it for? What's the problem that
we're trying to solve? What does success
actually look like? How does this fit into
corporate strategy? Why are we even working on this? And worse, there's no
collaboration that takes place. And collaboration really is
the secret sauce of innovation, right? That's where learning comes from. That's where innovation
comes from, the building off of each other's ideas. And I know this firsthand. Actually, just down
the street from here, I used to run a UX team at a
company called The Ladders. I worked there for about
four years starting in 2008. And my job was to come in there
and build a design practice. And the first thing that
I did was I set up a silo. That was our silo. And then every time somebody
needed work from us, they would come to us. And I played the traffic
cop in the center. Every time-- we need some design
work-- I provided some insight. Product management said,
hey, we need some work. I said terrific. Bob's got some bandwidth. Let's give it to Bob. Engineering said, we've got a
little bit of work over here. Terrific, Bob's got
some more bandwidth. Let's give it to Bob. Marketing says, hey, we
need the website spruced up. Terrific, Bob's got a
fraction of bandwidth left. Let's give it to Bob. Now Bob's supporting three
different projects a day, and every day he's coming
in deciding which two people he's going to piss off today. Right? Because you can't have
three number one priorities. You've got to pick one
thing and move on to it. As soon as you build that silo,
people do just enough work to move on to the next
thing, and the next thing, and the next thing. And just because design
is in the center here, doesn't mean that this
is unique to design. Engineering could be the center
here, or product management, or marketing, and so forth. Right? We've got to get people
distributed differently. And so the question
then becomes, what team structure
facilitates this culture of innovation and learning? Four qualities. The first is this. Small. Keep your teams small, six
to eight people, roughly. Again, I quote Jeff Bezos a lot. I like what he's doing. What he calls a two-pizza team,
a two-American pizza team. It's worth quantifying: if you can't feed your team
with two pizzas, it's too big, right? Six to eight people. That way you know
who everybody is. You know who to go to
when you need something, and there's amazing
accountability. There's nowhere to hide. If there's six people on
a team and someone's not doing their work,
you know very well who's not doing their
work right away. Second, take those six to eight
people and co-locate them. Put them together in one
place, not just on the same campus, but in the same area so they're
sitting next to each other, so they can talk to
each other, so that they can reduce the feedback loop
from each other as quickly as possible. And the reality is,
I know for you guys, you've got multiple offices. Where I work, we've
got multiple offices. If you have to work
with a distributed team and you want to build
this hypothesis-based, experiment-driven approach
to building products, you at least have to be
awake at the same time. You've got to at least have
some overlap during the day. I have colleagues in Singapore. They're fantastic people. I never get to work with them. It's a 12-hour time difference
between here and there. So we've got small, co-located,
and next-- dedicated. Everybody's working on
one project, this project specifically, right? That's what we want
people working on. As soon as we steal somebody off
to work to fix a bug somewhere, or to work on an
executive pet project, they're not working
on our project. And we're a small team, and
so we're waiting for them to come back, and we can't
move until they come back. And the context
switching is costly. It takes time to ramp
down and to ramp up to each one of these ideas. Small, co-located, dedicated,
and lastly, self-sufficient. The team should be able to
do whatever it needs to do. If it needs to write code,
someone can write code. If it needs to design,
someone can design. Research, someone can research. Product management
decisions, somebody needs to be there to make that. It doesn't necessarily mean
that you need representatives from all of those disciplines. You just need the
competencies on the team, so that the team can make
those decisions at the pace that the information comes in. Remember, we're seeing
insight and information come in super fast. The team has to be capable and
empowered to react to that. So those are the four
qualities of a team that values learning and innovation. So then the question
becomes, how do we tell the team what to do? How do we task the team? This is certainly a
viable option up here. You just yell at
them for a while. And I've had bosses who
have tried this strategy. It works temporarily, until
someone quits and finds a better job. But in reality, what are
some ways to task a team? And once again, I'll
share some anti-patterns. My first and my
favorite anti-pattern is the roadmap, the
product roadmap. I don't know if you
guys make those or not. I've made a lot of
them in my life. And look, they're
compelling documents. Right? They tell a great story. We're here, and we're
going over there, and there are five steps
between here and there. And this is what's between
us and the deadline. And this is when we'll launch. The end comes at
this particular date. And so we convey this
idea that we fixed scope, and that we fixed
the deadline as well. Now I don't know about
you, but in the 17 years that I've been building
software products, every time we fixed scope and
fixed time, one of three things has happened. We've moved the deadline,
we've reduced scope, or we've worked in crunch
mode for the last three weeks of the project-- 80-hour
weeks-- to get it done. And then we all
resign at the end and go find jobs somewhere else. In reality, roadmaps
look like this, because again, there's so
much unknown and so much complexity that goes
into software development that we simply can't
predict exactly where these challenges
will come up. Something that was easy
turns out to be hard, right? There's a new technology
that's entered the marketplace, and it's radically
shifting the way that we actually
deliver our service. We have to be able to
bounce around like this. And that's the reality. We can't simply predict
exactly when we'll be done, and exactly what it will look
like when we get done with it. This is how Gilt Group does
it from a recent blog post. They say they don't
maintain a roadmap document. Their road map is simply the
sum of active initiatives. Initiatives are business goals
that they're trying to achieve. And every so often, they
revisit the prioritization of those business goals. Are we making progress? Is it still worth it
for us to continue pushing in this
particular direction? Right? No specifics about exactly
what will launch and when. And we've seen this from
the authors of "The Agile Manifesto." This is one of them. This is Kent Beck and he
tweeted this a while back. And he said, look,
product roadmaps should be lists of questions,
not lists of features. Because when you have
these product roadmaps that are lists of features, we're
incentivizing our teams to deliver output,
to deliver features. And when we deliver
nothing but features, we end up with product bloat. Look, this is the Microsoft
Word version of a guitar, right here. Right? 95% of this guitar
is useless to 95% of the people who actually
pick this thing up. This thing was specifically
created for Pat Metheny, and really only he
can really do justice to the entirety of this product. The rest of us could probably
make out a few of the notes on that long neck
right in there. But this is what we
get in product suite after product suite, when we
incentivize our teams to launch features, to generate output. And that's what roadmaps do. They drive us there, right? At the end of the day,
shipping a product is not a measure of success. Too soon? Sorry. Simply launching the product
is not a measure of success. The reality is, did we
change customer behavior in a positive way? Again, as we push
features in, features out, all we end up with is
feature bloat, and products that should do things very,
very simply and very cleanly. Now interestingly enough,
most companies currently manage to output to features,
to getting out the door, because it's easy. It's binary. Did you ship it? Yes we did. Terrific. You get to keep your job,
get a raise, get a bonus, whatever it is. Did you not ship it? Terrific. You get fired or reprimanded
or whatever it is. Instead, we should
be tasking our teams to achieve business outcomes. Business outcomes are
a measurable change in customer behavior. It's an objective
measure of success. Did we get customers
to click on more ads? Did we get customers
to come back more often to the Search Results page? Did we get them to
come back less often, because we're giving
them the right results? Those are measurable changes
in customer behavior. And that's how we
should task our teams. We give them a problem to solve,
not a solution to implement. Now the interesting
part is this gets messy. It's a big can of worms. It's not a binary thing. If you task your teams
to increase successful sign-ins
to your product by 50%, and over the course
of six months, they increase it by
32%, did they fail? Did they succeed? It's not as clear. It's messy, so it's more
difficult to manage, and many companies
stay away from this. Instead, we have
to get granular. We have to give teams these
outcomes that they can directly attribute to the work
that they're doing. Right? Decrease shopping cart
abandonment by 15%. Did the work that we
did actually do that? Right? Increase the number of times
a customer visits each month. That's the kind of
success criteria we want to give people, and
then let the teams figure out how to actually
achieve that goal. Interestingly enough, I was on
a project over at The Ladders just like this, just
to give you an example. I was tasked with
moving a metric called Job Seeker Response Rate. Job Seeker Response
Rate was the number of times a job seeker responded
to an employer contact through our system. The team that we put together
was a small, co-located, cross-functional team, and their
task was to move this number-- which was our current Job Seeker
Response Rate, 14%-- to 75%. That was it. No further instructions
were given to that team. Your job is to get
that number to 75%. Go figure it out. Right? And so we got together. And we brainstormed. We did all the things that make
up brainstorming and innovation activities-- Post-It
notes, and white boards, and all of those things. And we tried a lot of
things, and we experimented. And at the end of
three months, we had managed to get
that number up to 38%. And we went back up in
front of the organization, and we said look, you
funded us for three months. We've moved the
number from 14 to 38%. Here's what we've learned. If you fund us again for
another three months, this is what we're going to do. And the organization
said, terrific, go ahead. Do more Post-It
notes, which we did. We actually shipped code,
believe it or not-- features, and designs, and so forth. And at the end of six
months, we got to 68%. We didn't hit 75. We hit 68. The company said, you know what? That's good enough. It's close enough. We are going to move
you to the next project. And we built whatever features
moved the needle appropriately, and could scale
with the business as we moved forward. And what that allowed
us to do, by having those objective
measures of success, we could make decisions based on
objectivity, on observing what customers are doing and why. And when you do that, you
collect a bunch of data. And when you've got
enough information, you've got three
decisions to make, right? The first, if the data says,
hey, this is a bad idea, you should kill this idea. This is a bad idea. Don't do this. OK? Kill the idea if the data
says it's a bad idea. Maybe you get some data back
that says, hey, you know what? This is a pretty good
idea, but the execution is not moving the
needle fast enough. That's when you pivot. You maintain your strategy. You change your tactics. When you find something
that works well, that's when you double down. That's when you scale it. That's when you optimize it. That's when you give
it to more people, because it's moving the
needle in the right direction, but only based on those
objective observations that you're making as you
slowly increase it now. Now look, there
are easy parts of this, and there are hard
parts of this. Here's the easy
part-- measuring. Measuring is easy. It's just instrumenting
the code in such a way that you're actually
collecting this data. Talking to customers is easy. Right? It gives you the
qualitative insight into why customers are
behaving in certain ways. Right? And then reflecting,
coming together into teams and saying here's
what we've learned. This is what's happening. This is what
customers are doing. Here's why they're doing it. These are the easy parts. Here's the hard part. Changing course,
actually saying, you know what, yeah,
we've gone down this path for three months. The numbers are
coming back and saying this is the wrong
path to go down. I know, I know, but
the executive really wants this to launch. Or wow, we spent three
months working on it. What are we going to
do, just throw it away? Yeah, that's what
you're going to do. You're going to throw it
away, because it's not meeting your
business objectives. The hard part is having the
organizational and the team maturity to say, this
is the wrong direction. Let's change course. And let's move on
to something else. And again, from this blog
post over at Gilt Groupe-- I thought this was really
interesting-- they define two types of KPIs-- key
performance indicators. They've got high-level
strategic ones that the company cares about
for an extended period of time. And then they give teams
these tactical KPIs to achieve that they
know are leading indicators of these
strategic KPIs. And the teams work on that
until they hit those numbers, and then they move on. That's the tactical
business objective to learn against. And we have to build
an organization that takes advantage
of the small team, and that allows them to come
up for funding periodically. It's almost like
a start-up, right? You've funded us for
this amount of time. You've given us this objective. After that amount of
time, you can say, look, this is what we've learned. Will you fund us again? And the organization gets to
make an evidence-based decision about whether or not it's
worthwhile to actually put more money into this
particular path. The last thing I want
to talk about is this-- how should the team work? We talked about the
makeup of the team. We talked about how
we task the team. And the final thing is,
what should the process be? How should the team work? And then once again,
we're going to start with a few anti-patterns,
how the team should not work. So first and foremost, a lack of
cross-functional collaboration really starts to break
down a culture of learning and a culture of innovation. This is a bit of an excuse to
put more circus photos up here, but just to give you a sense,
those are the elephants, obviously, on the end. The women in the middle walked
on large white balls up ramps. That's what they did. And that's Charlie. He served our meals,
three meals a day, if you were brave enough
to eat food from Charlie. And most days, we were. Some days were a little
sketchier than others. Now look, there was no
cross-functional collaboration in the circus. In fact, the silos
were so deep there were interdepartmental silos. The clowns fought each other
over who originated a gag. I mean, can you imagine
if the Human Cannonball-- and we had a tiger trainer. We had 10 tigers. If the Human Cannonball and the
tiger trainer collaborated-- the guy's already
flying across the tent. If there were 10 tigers
underneath there, what's the difference,
ultimately? But at the end of the day, the
excitement of that act is 10x. None of that ever
happened, because everybody felt like they would be giving
up something unique if they let go of their originality
and collaborated with somebody else. Another anti-pattern is
a fixation on job titles. We get so hung up on what
it says on our job title. Oh, you're the designer. There's no way you can code. You're the engineer. There's no way you
can design, right? And thinking about that
greatly limits the contribution that our small team
members can make to the process of
learning and innovation. To drive the point home,
I want to share with you this photo right here. Does anybody know
who this guy is? Any guesses? Take a guess. AUDIENCE: Nigel from Spinal Tap? JEFF GOTHELF: It's not
Nigel from Spinal Tap, but that's a great guess. AUDIENCE: Gene Simmons? JEFF GOTHELF: It's
not Gene Simmons, but you guys are in the
right era, for sure. AUDIENCE: Randy Rose. JEFF GOTHELF: It's
not Randy Rose. That's a really good one. I never heard that one before. That's a good one. All right, I'll give it to you. This is who this is. This is Jeff "Skunk" Baxter. Jeff "Skunk" Baxter was a
founding member of Steely Dan, and he was a Doobie Brother
for 35 years-- legendary American rock-and-roll
guitarist. Any guesses on
what he does today, other than playing nostalgia
concerts at state fairs? AUDIENCE: Human Cannonball. JEFF GOTHELF: Human Cannonball. That's a good one. UX designer? No, he's not a UX designer. This is what he does today. Right? I'll let you read
it for just a sec. Mr. Baxter is a consultant
to the US Government on missile defense. That guy. For better or for
worse, that's the guy. OK? The point here is, if he
showed you his resume, and it said, listen-- founding
member of Steely Dan, Doobie Brother for 35 years,
would you let him within a mile of a conversation
on missile defense? And of course the answer is no. We would never let him
do that, because we get hung up on that job title. Again, people have secondary
competencies and passions, and they're really
good at other things. Let's not limit
their contribution, especially in the early
stages of a product, during product discovery
and conception. Inevitably, people
will fall back to their core competencies. Engineers will write code
and designers will design. But especially in the
early stages of a product, let people contribute in
whatever way that they can. Other anti-patterns. A fear of failure, right? My ass is on the line, right? Right? If I don't get this right,
something's going to happen. I'm going to get fired. I'm going to get demoted. And we create these
cultures, but we never consider the scope
of the failure. Think about that Amazon
scope of failure. If they get 11.6 seconds
worth of a release wrong, that's terrific. They've learned something. And they can roll it back, and
they can move that forward. If we take small risks, then
the failure brings learning. That's the experimentation. And that's what we should value. We shouldn't make people
afraid of actually getting something wrong. That should be, ultimately,
encouraged, right? Another is arbitrary deadlines. If you owe Paulie money,
you got to pay Paulie. Right? That's kind of how it works out. Arbitrary deadlines
are typically just motivational tools. There's usually no
real reason to get something done by
a certain date. And we talked earlier about
what happens when you fix date and you fix scope. If you fix the
date, that's fine, but let the teams
build the best product that they can by that
deadline, and then let them continue optimizing
it after the deadline passes. What ends up happening here,
is the teams end up building this CYA culture, where we do
just enough to get things done, and then to move things
forward to the next thing. We never do our best work. We never take risks. We never innovate, because
we're afraid that we're not going to hit the
deadline, or that we're going to get fired,
and so forth. And so the thing
to think about is, how does a culture of
learning change the way that a team works? Ultimately-- So we talked
about who's on the team and how we task the
team, but how do we actually get them
to do their best work? And there's a couple of
different ways to do this. First and foremost,
people take smaller risks. Right? That's the first tack. This could essentially
be the-- you could argue this is the MVP
for the GoPro camera, right? Let's get that out there. Let's try it and
see if it works. Let's see what kind of
product we get out of it. There's a key principle
in lean thinking that says that we're always
moving from doubt to certainty. That between us and the
end state of perfection, which we never
attain, there's a fog. And we can only see two or
three steps into that fog. And so to mitigate the risk of
running in the wrong direction, we take small steps. We take small risks. We run experiments. We test our hypotheses. And if the results
from that experiment come back and
confirm our thinking, we keep going in that direction. But if the results come back and
they say that was the wrong way to go, we roll back, or we
pick a different direction. Because we don't want
to run head-first into the fog assuming that
it's the right way to go. We take smaller risks. This was awesome. I ran into this. This is Map My Run. You guys know this app? I was running in LA this summer
while I was working out there. Early in the morning, you can
see the dates, the time stamp, so you know I'm not lying. 6:54 AM I was
already out running. And I got to the
end of Venice Pier. Venice Pier is beautiful. It goes a half mile into the
ocean, and the sun is rising, and the palm trees are swaying,
and the surfers are out. And so I paused to take
a break and kind of take in the scenery. And I stopped the
tracking of my run, and this beautiful modal
window comes up and says, hey, would you like to take photos
and add it to your run? And at that moment, I was
feeling particularly inspired. Adrenaline's flowing,
and I'm sweating, and I'm breathing heavy. And I said hell yeah. I also liked the fact that
they used the term MVP, coming from the lean world. They mean most valuable
player, of course. But I said, terrific. Go MVP. And I tap Go MVP, and they're
like, all right, great. Give us 30 bucks. And you can totally do that. And I was, like wah wah. No way am I making a rational
purchase decision at 7 o'clock in the morning, wheezing, can't
breathe, and eyes are burning. So the end of this workflow
is kind of crappy, right? But the beginning--
none of these features here ever have to exist. Right? All you have to do
is put up that modal and count the taps on
Go MVP and No Thanks. And when you get
enough that say yes, I like this-- that's
when you start investing in actual
feature development. Right? That's a small experiment. That's a smaller
risk that you can take to see if there's any value
in actually investing further in building this
particular product. Right? Because the agile
world has come up from an engineering perspective. The 17 writers of "The Agile
Manifesto" were engineers. And typically agile--
in most organizations that I deal with-- comes from
the engineering department. It's very rare that
the designers are like, let's do agile. And the engineers are like,
no, we don't want to do it. Let's do waterfall. Right? What's happened is that we built
these amazing organizations, these amazing software
engineering organizations, that are incentivized for
the efficient delivery of high-quality code. But what the agile
process doesn't have-- as my friend Bill Scott
likes to say-- is a brain. It lacks the capacity to
determine what to build, and how to implement it,
and how to design it, and what copy to
put on top of it. And this experimental,
hypothesis-driven approach, where we take small risks
and then we learn from it, allows us to drive the
agile process in such a way that we get great code and
great design and great copy that customers
actually want to use, because we've measured
their behavior. Our definition
of success is the change in
customer behavior. And we have to remember
that works as designed is simply not good
enough anymore. So one more. I just want to prove to you
how healthy and athletic I am, so I'm going to use one
more athletic story here. But this is a picture of my gym. I live in New Jersey. This is my favorite
feature at this gym. It's called the cardio
theater, and essentially, it is a movie theater with
cardio machines in it. That's it. So instead of putting your
headphones on and watching a movie on a small
screen, you got on the elliptical
or the treadmill, and they blast the movie
out at you in the morning. I like it. I love that feature, in fact. I usually go in the mornings,
so I arrive around 5:30. At 4:30 in the morning, the
guy comes and opens the gym. He turns the lights on. He pushes Play on the movie. He sees that there's
a picture and he hears that there's sound. As far as he's concerned,
works as designed. And then he goes back
to the front desk. I show up at 5:30, along
with half the population of northern New Jersey. For some reason, everyone's
awake at that hour. And as soon as you get more than
two people back here running on a treadmill, you can't
hear the movie anymore. Unusable, undesirable,
works as designed. I have to get off
the treadmill and go to the front every single day,
and tell the guy to turn it up because we can't hear the movie. So we have to ensure not
just that a feature works as designed, but that people actually
want to use it and can use it. And the way we do that is with
clear definition of success. Can we change customer
behavior in the right way? And we can measure that. And we use that as the mile
marker for whether or not we're building the right things. Now again, we want to promote
competencies over roles. We want people to do whatever
it is that they're good at to help us move forward faster. And then lastly and
most importantly, we want to promote
team self organization. Give teams a little
structure, and then let them figure out how
best to work together, and how best to achieve
the goals that you've set out for them. This is a quote from
Geoff Nicholson, who invented the Post-It note. He said that 3M wanted to
productize this thing with Six Sigma as soon as he
had thought of it. And he had no proof
that customers actually wanted this thing yet. We don't want to impose
a process on the team just because we think it's
good for the production of code or the production of features. Right? Let the teams figure out
exactly what they need to build, and when they need to scale it,
and how best to work together. Last thing I want to say. This is hard work. All of this, everything I just
told you to do is difficult. It's process change. It's culture change. Why should you do it? Why? Why invest in it? If things are going
OK, why do it? First and foremost, you're going
to make your customers happy, because you're going to build stuff
they actually want and can use, which means they'll
come back and use it, and they'll tell their
friends, and they'll share it, and you'll have more customers. You'll reduce waste by building
more successful products and not working for a
long time on products that don't stand a chance of success. You'll focus your people and
your time more effectively. And then lastly, this
increases employee morale. It allows people to feel like
they're part of a greater whole, there's a bigger
mission, their ideas count. They're loyal. They're passionate
about the ideas, because they are
ultimately their ideas that solve these problems. And that's terrific
for your hiring brand and for your
retention over time. At the end of the day, let
me leave you with this. You have to transform from
a culture of delivery, focused on getting
features out the door, to a culture of learning. Thank you very
much for listening. I'd love to take your questions. [APPLAUSE] Questions. I know we have mics over there. Yes, sir. AUDIENCE: You give some
great examples of companies that do this, they iterate fast. They learn quickly. They try a lot of experiments,
like Amazon, Facebook, even Google. But there's a company that
writes a lot of software that doesn't seem to do
this, and that's Apple. Nevertheless, they've
been very successful. So can you explain why
they managed to get things right without seeming to try all
of these constant experiments? JEFF GOTHELF: I would argue
that they do experiment, and that they certainly do a
significant amount of research. I just don't think that
we see a lot of it. And I don't think that-- they're
certainly not public with it like the rest of the companies
that you just mentioned, like Google and
Facebook, and so forth. I think what Apple really excels
at is understanding ecosystems. And they do this by
observation and by research and by learning. And then when they come to market,
they take ambitious gambles on design-driven innovation
to solve the problems that they're viewing
in these ecosystems. Now you could say look,
hey, you know what? They don't test, and they
don't talk to customers. I think they do
talk to customers, and I think they absolutely
build prototypes and run experiments. They seem to lose them in bars
in Palo Alto fairly regularly, right? But also take a look
at the iteration after iteration after
iteration of the hardware and the software products
that you're using from them, and you can see that
those essentially end up being experiments. I mean the original iPhone
feels like a rock compared to an iPhone 6 these days. And I think that there's
experimentation and tons of learning there. Incidentally, so I
think that's one thing. Relatedly, there's a
Chinese company called Xiaomi that
manufactures mobile phones and tablets. And they're competing directly. They're trying to compete
with Apple, anyway. They do iterative
design on their hardware and their software in one-week
iterations, which is amazing. Every Tuesday they
ship 100,000 phones. People snap them up, and they're
on the phone with these people, and they're measuring
usage to understand how well the products
are meeting their needs. And so that's what I've
seen as evidence of that. Now the last thing
I'll say about this, because this question
comes up all the time, is you can't dismiss some
level of design genius. Right? You have to give some credit
to Steve Jobs and the very few people in the world
who are like him, like Jony Ive and so forth,
who have this design genius. I don't think you
can dismiss that as a component
of their success. But I think the design
genius, the creativity, comes in in solving
real problems. And they see those real
problems by observation and understanding ecosystems. That's what I think. Yeah? AUDIENCE: I was
wondering if you could talk a little bit about the
structuring of small teams in big organizations
such as here, where there are literally
hundreds of engineers and tens of product managers,
just in one product. So do you see these teams
forming and reforming? Do you see these teams have
cross product, cross function? So how do you imagine them work? And then related to that, what
is the decision-making process like in that structure? JEFF GOTHELF: Yeah, it's a
great, really great questions. I don't claim to
have all the answers. I can share with you a couple
of anecdotes from my experience, and something interesting that
I heard last week on my travels. So the team structure itself,
typically designer, product manager, and engineers, so
like, one, one, and then two to four engineers as far
as each one of those pods. And I think you'll see that
repeated in some of the others, like the Spotify
whitepaper that was out. I think what's interesting
is this-- if you've
got a big problem and you've got a set of
engineers or a set of teams that are working towards
solving this particular problem, I think you give that
problem to all those teams. And I heard Mary Poppendieck
speak last night. Mary Poppendieck is-- she's
one of the legends of the Agile world. She's been around
35, 40 years, trying to make this stuff work
in large companies. And she was talking about
giving these pods, essentially, all the same problem,
and incentivizing them the same way, so everyone
has the same incentives to achieve the same
success criteria. And then they have to build
dependencies on each other. They have to work
together in order to achieve that
overarching goal. So that's one way to
kind of unite them in a similar direction. I think the question
then becomes, how do you make decisions? I've seen it work
one of two ways. And again, that's
just my experience. It's not necessarily the answer. Either there is a product lead,
typically a product manager. Although it could be
the design leader. There's a team lead
who makes a decision. And where I've seen it work,
actually, more effectively is that there's a triumvirate
of the product lead, the tech lead, and
the design lead, who kind of gather all these
ideas from the team, and then make a decision
together, and then bring that back to the teams. I've seen that work well, also. Yes, sir? AUDIENCE: I'm Interested
in constraints, things like legal or regulatory
or just bureaucratic, and where they live. So these things
are generally kind of constraints on
innovation, as well, or at least boundaries
of innovation. Should the team itself
own that constraint and possibly be
demoralized by it? Or should it be
external, in which case it becomes kind of an enemy
or a systematic problem? JEFF GOTHELF: I think you have
to work within your reality, right? So if you work in
health care, there are going to be
legal constraints and personal
information constraints, privacy constraints, that
you have to work with, and same thing in financial
services and so forth. I don't think that
you can ignore them. And I think you risk damage. Forget legal damage
for a second, but I think you
risk brand damage if you choose to flout those
in favor of experimentation and hypothesis. Now that being said, the
rigidity of those constraints is debatable in
certain situations, and I think that if you can
have a realistic conversation with legal department,
for example, or the branding
department, or IT, or whoever is
constraining your ability to learn the things
that you need to learn, I'm confident that
you can typically find ways to get around that. I'll give you an example. We did a lot of experimentation
with a big financial services company downtown from here. Again, 100-year-old brand, lots
of financial data-- it's risky. Options are-- you can test
off-brand, for example, so it has nothing to do
with the official brand. That's one test. You can opt people into
a beta-tester pool. So you're self-selecting
a little bit. There's some bias there. You get the more tech-savvy
individuals, and so forth. But at least they've agreed to
take part in these experiments. But there are ways to get around
this where you can negotiate with the people who
are constraining you to get some
kind of learning. I think that you
have to own it, and I don't think you can disregard
it, because ultimately whatever you come up with has to
live in that world as well. And I think that if
you disregard it, you're building something that's
probably destined to fail. Yes, sir. AUDIENCE: I also have a
question about the small teams. So the first question
is, what's the best way, in your experience,
to incentivize people to do their best
within the small teams? And second question is, when
you have the small teams, and you want to encourage them
to kind of all collaborate and work together on something,
but then in that environment, how do you assign
credit correctly so that people, again, have
incentives to kind of-- JEFF GOTHELF: Well, you
assume that taking credit is an incentive. And I suppose that
that could be. So again,
based on my experience, the best incentive
that I've seen has been to let people
figure out the solution. There's an amazing
level-- there's an exponentially bigger level
of passion and commitment to the success of an
initiative if the team is working on an idea
they came up with. The difference between telling
a team to build something and telling them to figure
out how to solve this problem is the incentive. That's what
motivates them. That's what
moves them forward. And the interesting thing--
if you buy my concept that the atomic object of
innovation is the team, then the team wins together
or the team loses together. There's no, the team
did great design, but the engineers messed it up. Right? There's none of that. It doesn't happen. The product is the product. You can't separate great
engineering from great design. And we managed to
achieve our business goal and meet the
customer's need, and it seems to be a winning product. That to me has been the
greatest source of incentive to get these teams to
work well together. OK? Anything else? Thank you guys very much for
spending your lunch with me today. That was a lot of fun. [APPLAUSE]