Why Scaling Agile Doesn't Work • Jez Humble • GOTO 2015

Reddit Comments

Really great talk. I'm going to be reading some of his books in the coming months, as I think that will be just as valuable as doing extracurricular training / certs / projects.

πŸ‘οΈŽ︎ 2 πŸ‘€οΈŽ︎ u/NickJGibbon πŸ“…οΈŽ︎ Jan 13 2019 πŸ—«︎ replies

Great talk with some great points. Just left wondering why it was called "Why Scaling Agile Doesn't Work"?

Much of it was about applying Agile (which, I agree, might not just "work" unless you take and try applying scaled-agile ideas - though many of them are non-scaled agile benefits realised at scale!).

For example - knowing that 9 months of your project delivery is spent trying to tease out what projects to do is often only possible if you've got the execution process sorted. Being trusted to extend Agile to the project pipeline sometimes only comes as a result of having proven yourself in development!

"You're not agile if you're not releasing regularly, at any scale" - that comes up early in the talk, but the criticism of scaled methodologies later is also questionable, since they usually advocate regular system integration testing and demos!

As I say, Jez is engaging and makes great points, I might have been irked by the attend-bait talk title!

πŸ‘οΈŽ︎ 1 πŸ‘€οΈŽ︎ u/DMRetro πŸ“…οΈŽ︎ Jan 13 2019 πŸ—«︎ replies
Captions
Well, what a great introduction. I think that's my favorite introduction ever. Thank you, Marcus. I wrote this book with Barry and Joanne because, basically, we wrote the first book and people said "oh yeah, that makes sense", and then they tried to do it and they couldn't, because of all this other stuff. So I collected all the reasons people said they couldn't do it, and wrote a book to explain why they actually could. It's a book to arm yourself with when people say "oh, that's a great idea, but we can't do it here because of X", and you can say "well, actually..." I don't know if you'll make many friends by doing this, but I can't help with that.

If you were at my workshop, this is the too-long-didn't-read version, so you probably won't learn anything new. Please go and see something else instead, unless you're happy to sit here for 15 minutes and have me retell you bits of the workshop. I won't mind if you leave, and even if you didn't come to the workshop, I still won't mind if you leave. That's fine.

So: why scaling agile doesn't work, and what to do about it. Let me get straight to the point. The reason scaling agile doesn't work is that agile is just one piece of the puzzle. The industry-standard process for software development in most companies is still a process I like to call water-scrum-fall. I stole this, by the way, from a guy at Forrester called Dave West. The problem is this: even if you're doing nice iterations in the middle (I'm going to try not to break all the cables... yeah, that's better), even if you're doing agile stuff there... who here works in a company of more than a thousand people? OK. If you work in a big company, and even if you work in a small company, you probably have this thing where, before anyone does any work, you spend months on the front end: someone has
an idea; you have to write a budget proposal; it has to go through the budgeting cycle; you have to do big, long estimates and work out how much it's going to cost and how long it's going to take; then you do the design phase; and eventually some enormous document lands on someone's desk, and then you can start development. And we're doing development in nice little iterations, and that's great, but you can't actually release anything: your work and the work of the other teams has to be integrated, then it goes to central testing, then hopefully you manage to fix some of the bugs found in testing, then it gets tossed over the wall to IT operations, and only then does it actually go out to users. And that's the point where we actually derive measurable customer outcomes from the work.

So this overall cycle time, or lead time, depending on how you define the words - the time from golf course to measurable customer outcome - is usually pretty long, and making the bit in the middle agile usually doesn't make a big difference. It might make a difference of a few percent. Whenever I go and do value-stream maps at companies, it's usually "oh yes, this bit takes nine months, but let's not talk about that, because we can't do anything about it". And I'm like: whoa, that's the bit you need to fix! It doesn't matter how much you mess around with the middle if it's taking you nine months to get through the front end; you're not making a big difference to the overall outcomes. Who works at a place where you have this experience, or maybe you have a friend who does? OK, most of you. Good, you're my audience.

So, first of all, what tends to happen when you do agile in this environment is you end up creating a lot of teams, and the way that's usually made to work is you still do all the up-front stuff, and then you take all the work and break it down into lots of tiny
little bits, and then you hand the tiny little bits out to all the teams, and they go off and do them, and at the end they come together and try to make the system as a whole work. And it doesn't, and you have to spend weeks or months actually trying to make it work. And when it does work, you find out it wasn't really what you wanted, and you're like "oh, but now we have to release it, because it's taken us a year or two years to get here, and it's on scope, on time, on budget". So you release it. And then maybe, like me, you had the experience where you rolled off the project and went to do something else, and a year or two later you were talking to one of your colleagues who was still on the project, and they said "oh yeah, that thing we built - no one actually ended up using it; it was cancelled", or it went to market and wasn't successful. That, for me, was the most miserable moment of my career: this thing I built, that I spent a year on with this large team, where we thought it was a success - but it never ended up making much money for the company, and after a couple of years it got cancelled. Agile doesn't fix that problem, because fundamentally the problem is that the idea you had at the beginning wasn't the right idea, and it took you so long to find that out that you were screwed.

Then you look at the decision-making process. I love running surveys - surveys are a really rich source of confirmation bias - and I ran a survey with Forrester to find out about agile practices. One of the questions we asked was how companies make investment decisions for products. 47% of people replied that a committee decides from potential options: decision by committee. 24% of people said they use some kind of financial modelling; they use economics to make their investment decisions. Whoo! That seems like a good idea. As a joke, we
put "the opinion of the person with the highest salary wins out" as an option - and you can see I spelled "salary" wrong here, so sorry about that - but 13% of people answered that that's how they make decisions. This is called the HiPPO method: the highest-paid person's opinion. And that really is the same thing as "committee decides from potential options"; those two are actually the same thing. Then 9% of people said they use a product portfolio approach, which is also the same thing. And then some people - admirably honest, I thought - said they take no systematic approach, which is basically also the same thing. So what we can see here is that 76% of people are not using an economic framework and 24% of people are, which is kind of depressing. That's the state of software development and portfolio investment today.

In most big companies, the output of this process is a few very big projects; somehow the annual budgeting cycle leads to a few big projects being the ones that get funded. Who has a retirement savings account? All right, I've got a product I want to sell you for your retirement savings. In my product I have four big things that I'm building. It's going to take me a really long time to build them, and at the end we don't know if they're going to be successful or not, and it's going to take us a really long time to find out - but if they are successful, they might make lots of money. Would you like to invest in my product? No, because you're not stupid. And yet that's exactly what we do with the IT budgeting process in most cases. We'd be better off taking our money and putting it into unit trusts; that would provide a better return on investment than our IT budget investments. And the things we do to mitigate the risk of being wrong don't work. So: what's the main activity we perform to make sure that we
should really do the projects - the main analysis activity? The business case. And what's the main kind of analysis we use to create the business case? Return on investment is the main thing we want to know about, but what do we actually spend our analysis time on? I would love it if people put effort into finding out the potential business value, but that's not where I see most people putting their effort. What I mainly see is estimation: estimation of cost. That's where I mostly see people putting in a lot of time and effort, working with the developers to estimate the amount of work they're going to do. The problem with that is that it's pointless.

There's a guy called Douglas Hubbard, who wrote a book called How to Measure Anything, and he's been looking at business cases for many, many years. He does a kind of analysis called Monte Carlo: you vary the inputs to the business case and see the impact of changing those values on what you care about, which is normally either return on investment or lifecycle profits for the product - those are the two variables you care about. What he found is that cost has very low information value: varying the cost doesn't have a huge impact on return on investment or lifecycle profits, which is what you really care about. The single most important unknown, he found, is whether the project will be cancelled. The next most important variable is the utilization of the system, including how quickly it rolls out and whether people will use it at all. Those are the two things that matter: is it going to get cancelled, and when it rolls out, are people going to use it? That's true for internal IT projects and for external product development alike, and it's the thing we don't normally spend much time finding out.
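Hubbard-style sensitivity analysis is easy to sketch. The toy ROI model and all the numbers below are invented for illustration; the point is only that, under a model where value scales with adoption, varying the cost estimate moves the answer far less than varying adoption or cancellation does:

```python
import random

def lifecycle_roi(cost, adoption, cancelled):
    # Toy business case: lifecycle value scales with adoption, zero if cancelled.
    value = 0.0 if cancelled else 5_000_000 * adoption
    return (value - cost) / cost

def monte_carlo(vary_cost=False, vary_adoption=False, vary_cancel=False,
                n=50_000, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        cost = rng.uniform(350_000, 650_000) if vary_cost else 500_000
        adoption = rng.uniform(0.1, 0.9) if vary_adoption else 0.5
        cancelled = (rng.random() < 0.2) if vary_cancel else False
        results.append(lifecycle_roi(cost, adoption, cancelled))
    return results

def spread(xs):
    # 5th-to-95th percentile range of ROI: a crude proxy for how much
    # "information value" an uncertain input has.
    xs = sorted(xs)
    return xs[int(0.95 * len(xs))] - xs[int(0.05 * len(xs))]

cost_only = spread(monte_carlo(vary_cost=True))          # ~3x range in ROI
adoption_only = spread(monte_carlo(vary_adoption=True))  # ~7x range in ROI
cancel_only = spread(monte_carlo(vary_cancel=True))      # ~5x range in ROI
```

Even with a generous ±30% error band on the cost estimate, the ROI distribution here is dominated by whether people adopt the system and whether it ships at all - which is Hubbard's point.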
Lean Startup, to me, is exactly this: how can we cheaply run experiments to find out whether people will actually use the thing and pay money for it? That is way, way, way more important than any kind of estimation activity - and yet it's much rarer to find people doing that than to find them spending weeks or months on very detailed estimation. And once you've done the estimation, you break the work down into little estimated bits, add it all up, and hand the little bits out to the teams to do. We're back where we started.

The other problem with this approach is that we create these huge projects, and when we create huge projects, we batch up all these little bits of work into one big batch, which we then push through the process. Part of the reason we do that is that the transaction cost of taking work through the entire value stream is so high - and part of the point of continuous delivery is to make it economic to work in small batches, so we can get feedback much more quickly. That's a quick preview of the continuous delivery thing. But when we batch up work, we batch a few very high-value things with lots of very low-value things, and push them all out together.

There's a really good case study called "Black Swan Farming Using Cost of Delay". It was done at Maersk, which was the world's biggest or second-biggest shipping line - a really big company with lots of IT projects. They had about 3,000 different features going through the value stream, from analysis through development and delivery, and one of the first things they did was try to estimate the value of those pieces of work. They did this using a method called cost of delay. If you go to this link you can download the paper; I really highly recommend it as a case study of how to implement lean and agile at scale. What they found, which is really interesting, is that by using cost of
delay to measure the value of the features - briefly, cost of delay is how much it costs you per unit time, per week in this case, to not deliver the feature: every week that we don't deliver this feature, how much is it costing us in opportunity cost? That's a way to prioritize work in dollar value, by considering the opportunity cost of not delivering each thing and comparing them. What they found is that, of these thousands of requirements going through the system, there were about three that were costing the company between $2 million and $2.8 million per week to not deliver - and then there's a long tail of features whose cost of delay is tiny in comparison. So it becomes very clear what you should work on: this big IT organization should stop doing everything except those three pieces of work, deliver them as fast as it possibly can, and do nothing else. And normally you're not even exposing the value of what you're doing. Yes, they were making assumptions to come to these numbers, and it can be quite hard to work out the dollar value of a feature - but when the variability is that high, when there's an order of magnitude of difference between the values, it doesn't matter if you're wrong by a factor of two. Who cares? It's still going to be really obvious which the important features are.

Joshua Arnold has done a bunch of other projects where he's analysed things this way, and he's found that this shape - a power-law distribution - is very, very common. Most places where they've looked at the value of the features, it's been this power-law curve. So this is pretty typical of what you'll find: a few small, very valuable things get batched up with a ton of other stuff that's low value, and when you have these big batches of work going through the system it's really hard to prioritize the individual pieces.
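The prioritization logic is simple enough to sketch. The feature names and most numbers below are invented (only the two-to-three-million-a-week outliers echo the talk). Ordering by cost of delay divided by duration - sometimes called CD3, and the approach used in the Maersk case study - minimizes the total delay cost when work is done sequentially:

```python
from itertools import permutations

# (name, cost of delay in $/week, estimated duration in weeks) - numbers invented.
features = [
    ("booking integration", 2_800_000, 12),
    ("rate calculator",     2_000_000,  8),
    ("report redesign",        40_000,  6),
    ("settings cleanup",        5_000,  4),
]

def total_delay_cost(order):
    # Each feature keeps incurring its cost of delay until the week it ships.
    elapsed, total = 0, 0
    for _name, cod, weeks in order:
        elapsed += weeks
        total += cod * elapsed
    return total

# CD3: cost of delay divided by duration, highest first.
by_cd3 = sorted(features, key=lambda f: f[1] / f[2], reverse=True)

# No other sequencing of these features has a lower total delay cost.
best = min(permutations(features), key=total_delay_cost)
assert total_delay_cost(by_cd3) == total_delay_cost(best)
print([name for name, _, _ in by_cd3])
```

Note that the $2.8M/week feature is not first: it takes longer to build, so the smaller-but-faster $2M/week feature delivers value per week of capacity sooner. That is exactly the trade-off the raw cost-of-delay numbers expose.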
So what do we do? First of all, the whole project paradigm is based on the assumption that the features and requirements we've created are in fact the right thing to do. Almost always, as we'll see, that's not true. They are the wrong thing to do about two-thirds of the time: two-thirds of the features we want to build have zero or negative value and are not the right thing to do. I have data to back this statement up, which I'll give you the reference to. In the case of new product development, the numbers are much, much worse: more like 90% of the time, the product is the wrong idea. So for established products with well-known business models that work, two-thirds of the time the features have zero or negative value; with new products, where there's a larger amount of uncertainty, over 90% of the time it's the wrong idea. So we shouldn't optimize for the case where we think we're right about what we're building, which is what the project paradigm does. Instead of focusing on cost and spending a lot of work on estimation, we should focus on value, and on gathering information to justify the value of what we're building.

One of the most important things we can do is create feedback loops throughout our delivery process, and make sure the decisions we make - the product development decisions, the investment decisions, the requirement decisions - are actually the right ones. How can we get feedback on those decisions as fast as possible? And within the development process we want feedback loops too: did I break the system? Did I introduce a performance degradation? Did I introduce a security hole? This is part of the point of continuous delivery and the deployment pipeline: to create really effective, rapid feedback loops within our delivery process. The point of continuous delivery is to make it economic to work in small batches, and working in small batches is what allows us to get
those feedback loops operating at a high frequency. By working in small batches, we get feedback at high frequency, and that allows us to course-correct much more quickly - which is the whole point of agile. And when we've done these things, we can take an experimental approach to product development based on the scientific method: I have a hypothesis about a feature I want to build; how do I test whether my hypothesis is correct? What data can I gather to validate it? And we can apply this experimental approach not just to product development but also to process improvement, continually working to improve the quality of our development and delivery processes.

So the first idea I want to attack is this idea of "requirements". Whose requirements are they? Are they the users' requirements? Users don't know what they want; users know what they don't want once you've built it for them. They're the requirements of the HiPPO, the highest-paid person. So I don't like this word "requirements". What we have is hypotheses, and typically there's more than one way to achieve the outcome we want to achieve - and a lot of the time we're not even thinking about the outcomes. Who uses stories on their projects: "as a..., I want..., so that..."? Yay, OK, lots of you. Keep your hands up if you've written a story where you left out the "so that" because it was a bit hard to think about. Keep your hand up - I have, yeah; most of us have done that at one time or another, and it's actually pretty common. Who's worked on a project where the "so that" is missed out most of the time, like more than 50% of the time? Yeah. So that's life. We all know it's the most important bit, but in the day-to-day work process it's easy to forget, because "we know it's important, and we've got to do it, so let's just do it". I think we should focus much more on the outcomes.

One of my favorite tools for this is by Gojko Adzic; it's called impact mapping. It's a very simple idea, but very powerful, and the idea is this: we should work backwards from the outcome. Here's an example for a trading platform; the outcome we want to create is to reduce transaction costs by 10%. Who are the stakeholders? Well, there's a German settlement team - I didn't plan, by the way, that I'd be delivering this in Berlin; it just happened that way - there are some traders, and there's IT operations. At the second level are the ways in which those stakeholders could help or hinder this outcome, and at the next level are the actual stories: the "what", what you could do to achieve that outcome. Now, when we do analysis, what happens is you pick one story and throw everything else away, and that one thing is what goes downstream to development and then to IT operations. And that's a real problem. A lot of the time, the developers never even see the rest of this map; they just get "improve exception reports, so that trading costs go down". And often the number - how are we going to measure whether we succeeded? - is thrown away too.

The key thing about the impact map is not creating the impact map. The key thing, as with most agile practices, is the shared understanding of the whole team, which is created when you build the impact map. We have this thing everywhere in agile where you focus on the artifacts. The artifact is not the important thing; the shared understanding of the team that created the artifact is the important thing. So what's crucial here is having the developers and the operations people and the business people all work together to create the impact map, and then pick one: which piece of work do we think is the smallest possible work for the biggest outcome? Because, as Jeff Patton likes to say, what we want to do is minimize output and maximize outcomes.
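An impact map is just a tree - goal, then actors, then impacts, then deliverables - so it's easy to hold as plain data. In the sketch below, only the goal, the three actors, and the "improve exception reports" story come from the talk's example; the impacts and the other stories are invented to fill out the shape:

```python
# Goal -> actors -> impacts -> deliverables (stories). Partly invented example.
impact_map = {
    "goal": "Reduce transaction costs by 10%",
    "actors": {
        "settlement team": {
            "spend less time on manual rework": ["improve exception reports"],
        },
        "traders": {
            "submit fewer erroneous trades": ["validate orders before submission"],
        },
        "IT operations": {
            "resolve incidents faster": ["automate common runbooks"],
        },
    },
}

def candidate_stories(m):
    # Flatten the tree to candidate stories. The team then picks ONE to test,
    # rather than the rest of the map being thrown away before development
    # ever sees it.
    return [story
            for impacts in m["actors"].values()
            for stories in impacts.values()
            for story in stories]

print(candidate_stories(impact_map))
```

Keeping the whole structure around - rather than just the one story that "won" - is what lets developers see why their story exists and what will be measured.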
So what's the minimum amount of work we can do to achieve this outcome? You pick one, and then you design an experiment to test whether it will in fact achieve that outcome. The other thing I really like, which fits with this really nicely, is Jeff Gothelf's template for hypothesis-driven delivery. Instead of the story format, he has this format: "We believe that building this feature, for these people, will achieve this outcome. We will know we are successful when we see this signal from the market." What's the signal we're going to measure to prove that the thing we're building will actually deliver the outcome? And there are many ways we can measure the outcome using an experiment.

Most of this comes from the UX movement, and one of the big trends in agile is integrating UX. But often what I see - maybe you have friends who have this problem - is that UX is treated as basically making it look pretty. That's not the point of UX. The point of UX is to help us think about delivering things in a way that will satisfy our customers. The whole way our customers interact with our products and our company is within the realm of UX, including whether the features and products will actually make our customers and users happy.

This next bit is from Janice Fraser. She divides up user research based on whether it's quantitative or qualitative, and whether it's evaluative or generative. In design thinking there are two phases: there's the bit where you generate ideas, where you come up with lots of different options - building an impact map is an example of a generative activity - and then evaluative is when
you narrow down the options and pick one. That's a convergence activity: the process of deciding which option is the right one. So you can use user research to create a whole bunch of ideas, and then use different techniques to find out which of those ideas is actually going to produce the required outcome. And there are loads of ways to do this that don't involve building out the whole feature. You should think very carefully about how you can gather data without building the whole feature and only then finding out whether it was a good idea - because usually, even if you do find out it wasn't a good idea, there's the sunk-cost fallacy: you don't want to admit you spent all this time building something useless. Who's actually deleted a big feature from their product in the last year? OK, that's pretty good, actually; I'm pretty impressed. Did you delete it because it wasn't delivering value? Cool - well, good for you. That should be more common than it is.

In terms of an alternative to water-scrum-fall, I like to show this slide from Amazon. This is from 2011 - they're about an order of magnitude better right now. They are making changes to production on average every 11.6 seconds, up to 1,079 deployments per hour, with on average 10,000, and up to 30,000, boxes receiving a deployment. Who was at Nicole's talk last night? OK, lots of you, so I won't spend too much time on this. One of the reasons they did this was to get experimentation, which again Nicole showed. Ronny Kohavi helped to build Amazon's experimentation platform and was in charge of it; then he went on to work for Microsoft, where he's still in charge of the Bing experimentation platform. So he has loads of data from feature development, and this is the important piece of data: we could be spending two-thirds of our time on the beach, skiing, or at home with our kids.
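When you do run an experiment, the arithmetic for "did we see the signal?" is small. Here's a minimal sketch of a two-proportion z-test on made-up conversion numbers, using only the standard library; in practice you'd plan the sample size up front and use a proper stats package:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    # z statistic for the difference between two conversion rates,
    # using the pooled standard error.
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented experiment: control converts at 5.0%, the new feature at 5.7%.
z = two_proportion_z(500, 10_000, 570, 10_000)
print(abs(z) > 1.96)  # True: significant at the conventional 5% level
```

With 10,000 users per arm, even a 0.7-point lift is just detectable - which is one reason a high-frequency deployment pipeline, feeding experiments continuously, matters so much.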
If only we knew which two-thirds of the features we build have zero or negative value to our company! And people don't think about the negative-value case: the case where you're actually making things worse. This kills us in three ways. First, there's the opportunity cost of not building something that would have delivered value. Second, features add complexity to our system and have to be maintained forever; that's the maintenance cost. Third, that complexity slows down the rate at which we can add new features, so it makes new feature development slower. So if you're not doing experiments to find out whether your features are actually valuable, you're killing yourselves in these three ways. The biggest source of waste in software delivery is the stuff we build that delivers zero or negative value. So: who is actually running experiments to test the value of the features and products they're building? OK, there's a few of you; maybe ten hands went up in the whole audience. This, in my opinion, is the biggest thing we need to change in our industry: thinking about how we can actually gather data to validate whether the things we're doing are delivering value for our customers. And it doesn't need to be A/B testing; it could be other forms of user research.

The cool thing for me about software is that it has different characteristics from architecture, in the sense of buildings. People often say "I wish software could be more like buildings", meaning they wish it could not fall down, and be reliable and secure and things like that. But actually I think that's false. We don't want software to be like buildings, because software has these really unique affordances. Software is easy to change, even at scale, compared to buildings. If I want to re-architect this building, I have to pull it down and start again from scratch. You don't have to do that with software. Lots of people do do
that with software, which is unfortunate - but you can actually rebuild the plane while it's in flight with software. You can make architectural changes incrementally; it's entirely possible to do that, and people have done it very successfully. Software is much cheaper to change, and you can get value from software before you finish building it. You couldn't be in this building if there were no ceilings, but I can develop a piece of software that doesn't serve all of your needs but might serve the needs of a small segment of you, and you might be able to get value from that really early on. So software has these unique affordances that make it much easier to experiment and try things out, and we can't do that with buildings and physical architecture. And what's cool about that is that we can use the same techniques for experimental product development that the lean movement has been using for years for process improvement.

To show you what I'm talking about, I'm going to give you a case study, and the case study is the HP LaserJet firmware division. I've talked about this before - maybe you've seen me - but I'm going to focus on how they made the transition. This was a team that in 2008 had a really big problem: they were on the critical path for product development. For every new range of printers they wanted to launch, they were going to have to build new firmware, and that was going to take longer than actually building the chips that went into those printers. That was a really serious problem. I actually did some work for a European airline that was introducing those economy seats with bigger gaps between the seats - Premium Economy, more legroom - and it was going to take them longer to change the booking system so that you could book the seats than it was going to take them to put the seats into the aeroplanes. Which is how you know there's something really badly wrong with your software delivery
process. So it was the same thing here: it was going to take them longer to build the firmware than it was going to take to fabricate these custom chips, which had a one-year lead time. They tried a whole bunch of different things, and eventually they decided to look at their software delivery process. What they found is that they were spending a whole bunch of time doing non-value-add things: code integration, very detailed planning, porting code between branches. Every time they released a new line of devices, they took a branch in version control, which meant that when they fixed a bug in one line of devices and needed the fix elsewhere, they had to port it across all these code branches - same with features. They were spending 25% of their time porting features and bug fixes across branches; 25% of their time on product support - what does that tell you? A quality problem - and 15% of their time on manual testing. If you subtract all this from 100%, you find that only 5% of their investment was actually being spent on building features.

And by the way, the product management people - when the team said "can we get budget for adding test automation? can we spend some time refactoring?", management would say "no, no, you've got to build the features". Who's had that happen to them? Right. And it's very easy to see the fallacy in that idea, because here's the thing: the reason there's all this pressure to build the features, the reason you're going so slowly, is that you're spending all this money on things that are not adding value. The only way you can relieve that pressure is by fixing this, by removing the non-value-add work; that's what will allow you to go faster. So you have a vicious cycle: there's pressure, you have to do the features. Why is the pressure there? Because we're doing all this other stuff and it's so painful to deliver anything. And if we don't fix this, it's always going to get worse
and worse and worse, until we're just driven into the ground. They also looked at their cycle times: it was taking a week to get changes into trunk, they were getting one or two builds a day, and it took six weeks to do a full manual regression on their software before they could release it. Now, they didn't want to do continuous deployment; they didn't want to release new firmware ten times a day. Who's upgraded their printer firmware? OK, there are a few yak shavers in the audience; that's good. Most of the time you don't want to upgrade your firmware ten times a day. But what they found is that by implementing continuous delivery, by working in small batches, they changed the economics of the software delivery process, because it created feedback loops that enabled them to build quality in, rather than try to test quality in at the end.

Over the course of two years - well, first of all they re-architected in a big-bang way, which is always very risky and I don't recommend it, but in this case it worked, and I'll explain why later - they re-architected such that they didn't have branches for different ranges of devices any more. Instead they had one firmware build that would work on any set of devices. The firmware boots, looks at the hardware profile, and says "oh, I'm a printer; I'm going to turn these features off and keep these features on", or it boots, looks around, and says "oh, I'm a scanner; I'm going to turn these features off and turn these on". It's basically feature toggles for architectural differences. A guy I know called Dan Bodart has a saying: feature branching is a poor person's modular architecture. So this is a case where they re-architected so they wouldn't have to use branches, and that allowed them to work on trunk, do continuous integration, and build a very sophisticated test automation suite, with over 30,000 hours of tests that would actually send signals to the logic boards and get signals back as part of their test automation.
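The "one build, toggled by hardware profile" idea reads roughly like this. The capability and feature names below are invented; the shape is simply: inspect the hardware at boot, then switch features on or off accordingly, instead of maintaining a branch per device range:

```python
# Features gated on hardware capabilities discovered at boot (names invented).
FEATURES_BY_CAPABILITY = {
    "print": {"duplex_printing", "toner_status"},
    "scan":  {"scan_to_email", "flatbed_preview"},
    "fax":   {"fax_queue"},
}

def enabled_features(hardware_profile):
    # One firmware image for every device: the boot code reads the hardware
    # profile and enables only the features the device can actually support.
    enabled = set()
    for capability in hardware_profile:
        enabled |= FEATURES_BY_CAPABILITY.get(capability, set())
    return enabled

printer = enabled_features({"print"})
mfp = enabled_features({"print", "scan", "fax"})
print("scan_to_email" in mfp and "scan_to_email" not in printer)  # True
```

The payoff is exactly the one the talk describes: a bug fix lands once on trunk and ships to every device range, instead of being ported branch by branch.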
So anyone who complains about test automation for websites: I don't have it with me today, but I often carry around a copy of this book to spank people who whine about test automation, because here's a team that did it with logic boards, which is super cool. So they built a very sophisticated set of automated tests. They used Git, and every developer (400 developers, by the way, so this is at scale: 400 developers across three countries, Brazil, the USA, and India) was pushing changes into Git. They each had their own little repo, and they built a CI tool that watched those repos. Any time someone pushed a change, it would take that change and run two hours' worth of automated tests against it in a simulator. If that passed, it got promoted to stage two. In stage two, all the changes from the last time period get merged, and the automated tests run in the simulator against that merged set of changes. If that fails, the developers get an email: here are the tests that broke, and here's a button to reproduce those test failures on your development workstation. That's super important; if developers can't reproduce the acceptance test failures, that's a very serious problem, an architectural problem that you need to fix. If the tests succeed, the changes get into trunk: anything that passes stage two gets into trunk, and that's the only way developers can get into trunk. This basically fixed the problem where the build is always red because the developers don't care about it, because, guess what, the only way you get into trunk is if the build is green out of stage two. This all runs on a big rack of servers in a simulator. Then another two hours' worth of automated tests get you promoted to stage three, which runs on actual physical logic boards with emulators, and if that works, it gets promoted to the overnight stage four, which is the entire 30,000 hours' worth of tests run in parallel, so you get the results overnight.
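The staged promotion just described can be sketched as a simple gating loop. This is an illustrative sketch, not HP's actual tooling; the stage descriptions are paraphrased from the talk:

```python
# Each stage gates the next; a change only reaches trunk by passing
# stage 2, and only a fully green run is promoted all the way through.
# Illustrative sketch only; stage contents paraphrased from the talk.

STAGES = [
    ("stage 1", "per-developer change, ~2h of simulator tests"),
    ("stage 2", "merged batch of recent changes, simulator tests; pass => trunk"),
    ("stage 3", "tests on physical logic boards with emulators"),
    ("stage 4", "overnight full suite, ~30,000h of tests in parallel"),
]

def promote(change, run_stage):
    """Run `change` through the stages in order, stopping at the first
    failure. `run_stage(name, change)` returns True on a green run.
    Returns the list of stage names the change passed."""
    passed = []
    for name, _description in STAGES:
        if not run_stage(name, change):
            break  # in the real system: email the developers the failing tests
        passed.append(name)
    return passed

# A change is in trunk exactly when "stage 2" appears in the result.
```

The key property is that a red build cannot be ignored: nothing reaches trunk, or the later stages, until the earlier stage is green again.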
And you can find out whether the firmware is releasable within 24 hours of making the check-in, so they completely removed the six-week stabilization phase; that just goes away. The whole point of continuous delivery, by the way, is to get rid of that whole thing. Even if you're not releasing ten times a day, your software should always be provably releasable on a daily basis; that's the point of continuous delivery, and even if you're not doing continuous deployment, it produces huge benefits. They massively reduced the amount of time spent on code integration, on planning, and on porting code between branches. Product support activities went down from 25% of costs to 10% of costs. What does that tell you? Higher quality; this is how they measured the quality improvement. Manual testing went down from 15 percent to 5 percent, and costs invested in innovation went from 5 percent to 40 percent: an 8x improvement in productivity, measured in terms of the amount of investment going towards innovation. What do those of you doing the arithmetic in the audience notice about these numbers? They sum to less than 100%, because there's a new activity: creating, maintaining, and evolving the suites of automated tests. If you went to your manager right now and said, "Please can we invest 23 percent of our budget in test automation?", what would your manager say? That's a rhetorical question. And yet here we have the business case for doing exactly that. The business case is quite clear: despite the fact that 23 percent of the budget is spent on test automation, it enabled an 8x improvement in spend on innovation, because it massively reduced the amount of non-value-add work being done. This is how lean works: lean works by investing in removing waste so that you can increase throughput. The reason I like to talk about this case study is that they wrote it up in a book, and you can get the book, and here are the numbers to take to your CFO.
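The arithmetic behind those numbers can be checked directly, using only the percentages quoted in the talk (the remainder of the budget goes to activities not broken out here):

```python
# Cost breakdown before and after, using the percentages quoted above.
# The named categories sum to less than 100% in both cases; the rest of
# the budget covers activities the talk does not break out.

before = {"product support": 25, "manual testing": 15, "innovation": 5}
after = {"product support": 10, "manual testing": 5, "innovation": 40,
         "test automation": 23}

assert after["innovation"] / before["innovation"] == 8   # the 8x claim
assert "test automation" not in before                   # the new activity
assert sum(after.values()) < 100                         # named categories < 100%
```
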
Why should they invest in continuous delivery? What's interesting, and what I'm going to spend the last few minutes on (I'm out of ten minutes, right? Yep), is how they did it, because how they did it was really interesting. They did not create a detailed three-year plan for how they were going to do this. What they did is they had a goal. Their goal was twofold: one, get firmware off the critical path; two, a 10x productivity increase. That was it; that's all they wanted to do. They had no idea how they would do it, but they set these two very measurable, easy-to-understand goals. Everyone in this room can understand those goals; it's really obvious, and it's really obvious when you've achieved them and when you haven't. That's super important. Any time you're implementing agile or DevOps or any methodology, step zero is to understand your goals in measurable terms: what goal are we trying to achieve, and how will we know when we've achieved it? They're acceptance criteria for organizational transformation. If you implement agile because agile is cool, or because "it will make things better", you will fail. You've got to understand, in measurable terms, the goals you're trying to achieve for your organization. Then we want to take an experimental approach to achieving those goals: set some intermediate goal which will get us some of the way there, and try a bunch of things out to find out if we can achieve that intermediate goal. If we don't succeed, we've learned something, but we haven't spent too much time learning it, which is important. If we do succeed, that's great; let's try the next step. This is crucial: we're taking an incremental approach to process improvement and to changing organizational behavior, in the same way that we take an incremental approach to product development. That, for me, was the big aha moment I had writing the book: the incremental, iterative approach doesn't just apply to how we do product development,
it also applies to how we do process improvement and how we improve our companies and our processes, and it's something we should be doing habitually as part of our daily work. What was cool is that this was a case study for something that only got written about subsequently. They were doing continuous delivery before I wrote the Continuous Delivery book, and they didn't call it continuous delivery; they called it getting better at engineering. It didn't need a name; it was just, "Oh yeah, we're trying to get better at what we're doing." In the same way, they also simultaneously invented this thing, the improvement kata. A kata comes from Japanese martial arts: it's a basic practice that you practice over and over again until it becomes habitual. It's the same idea with music: you practice scales over and over again, that's the first thing you do, until you get really good at them, and then you move on to higher-order practices that combine the things you've learned. Same thing with sports: you learn the basic moves for tennis, and then you're able to combine them. Any human creative activity is the same: you learn the basic moves first, until they become habitual, and then you combine them into higher-order creative processes. The point of this book on the improvement kata, based on the study of Toyota's management method, is that improvement work, getting better at what we do, should be a habitual process, and here's the basic practice that you have to repeat over and over again in order to get better. Step one, understand the directional challenge: for HP LaserJet, the 10x productivity increase and getting firmware off the critical path. Step two, grasp the current condition. Step three, establish the next target condition. I'm going to show you what a target condition looks like, but it's basically an intermediate, measurable set of goals for the program as a whole. And then you don't plan how you're going to do it;
instead you specify the outcomes, and then you allow people to experiment with ideas to achieve those outcomes, in the same way as we do with product development. Then every day everyone is asking themselves: what's the challenge we're trying to achieve right now? What's the actual condition? What obstacles are in our way? What are we going to try next? When can we go and see what we learned from taking that step? The difficult bit of this is working out the target condition, so I'm going to give you an example from the HP LaserJet firmware. This is 30 months in, but these were the entire goals for the whole program of work, for 100 people on three continents. This was their plan for the month, and it fits on one piece of paper. And this one is quite long: when they started, their monthly plans for the program of work were three or four bullet points. But crucially, they had measurable outcomes. For month 30 it was: priority-one issues open less than one week; level-two test failures, 24-hour response; final priority-one change request fixed; reliability error rate at release criteria. These are ordered by priority, so you want to get the first ones done before you start on the later ones; the first ones are the most important. And they're all measurable. Here's the thing: because this is the only thing on the program plan, and it's just the outcomes, it's up to the teams to experiment with ideas to achieve those things. In most of the scaled agile frameworks, you break up the work into little bits, you hand it out to all the people, and then you come together at the end and it doesn't work properly, and everyone says, "Well, I did my bit; it was those guys." Usually that's not true. Usually everyone did do their bit. The problem wasn't that people didn't do their bit; the problem was that the bits didn't fit together, or weren't the right things to do. It was an analysis problem. In this model, nobody succeeds unless everybody succeeds. There's no "I did my bit".
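A target condition like the month-30 plan can be thought of as a prioritized checklist of measurable outcomes. The sketch below is hypothetical: the metric names paraphrase the goals quoted above, and the numeric limits (especially the error-rate figure) are made-up stand-ins, not HP's actual release criteria:

```python
# A target condition as a short, prioritized list of measurable outcomes.
# Metric names paraphrase the month-30 goals above; the numeric limits
# are hypothetical stand-ins.

TARGET_CONDITION = [  # most important first
    ("priority_1_issues_open_days", 7),        # priority-one issues open < 1 week
    ("level_2_failure_response_hours", 24),    # respond to level-2 test failures in 24h
    ("reliability_error_rate", 0.001),         # error rate at release criteria
]

def unmet(measurements: dict) -> list:
    """Targets not yet achieved, in priority order; a metric nobody has
    measured counts as unmet."""
    return [name for name, limit in TARGET_CONDITION
            if measurements.get(name, float("inf")) > limit]

# The month's condition is achieved only when unmet(...) is empty:
# either every outcome is met, or the team as a whole has not succeeded.
```

Notice there is no plan in this structure at all, only outcomes and current measurements; the "how" is left entirely to the teams' experiments.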
Either we all succeed as a team or we fail as a team, and because it's only one month, the impact of a failure isn't too bad. The point of failure is to expose problems so that we can fix them, not so that we can blame people: "Why couldn't we do it? Oh, our assumption about this was wrong, so let's set different objectives for next month that address the underlying problems." So I guess my conclusion is that we want to take an experimental approach not just to product development but also to process improvement. The processes that you have are the wrong processes, and you'll never have the right ones, but the important thing is to constantly work on improving your processes with a particular goal in mind, and we should all be doing that. If you leave your development processes alone, they don't stay the same; they gradually degrade over time and become worse and worse. If you're not constantly working to improve, then by default you're gradually getting worse. So I want to end with what I started with: don't optimize for the case where you're right. Assume that what you're doing is not the best thing you could be doing. Focus not on cost, not on estimation, but on how we can best provide value in our products and in the processes we're running. Build feedback loops, and grow and water and tend to your feedback loops, to expose and validate your assumptions. Work in small batches so that you can get feedback quickly, both for your product development and for your process improvement work, so that you can take an experimental approach to both. So hopefully we have a couple of minutes for questions. In the meantime, please remember to rate this session; I value your feedback. Also, if you want a bunch of my free stuff, email jezhumble at sendyourslides.com with the subject "DevOps" and you'll get a bunch of free stuff, including
these slides. So that's jezhumble at sendyourslides.com, subject "DevOps". Questions? Marcus has a microphone; who's first? I have questions from the tool, too. One person said, you know, it's easy to do the split testing and so on, but it's not as easy to do the analysis; and in a similar vein, it's great that we should evaluate cost of delay or value, but how do we actually do it? There are loads of different ways to do it; I could, and have, spent a whole day talking about mechanisms for that. For cost of delay, I really recommend looking at the case study from Maersk Line; they talk a lot about cost of delay and how to evaluate it. But the key thing is this: any time you have an idea, you're going to make assumptions, and people tend to fight about the ideas: "This idea is good." "No, this idea is good." What you want to expose is what assumptions you are making, and then find ways to test those assumptions, and there are loads of ways to do that. One of my favorite stories is from Zappos. Zappos sells shoes online, and in the first version of the product there was no supply chain. What happened was that any time someone ordered some shoes, the guy would go to the shoe shop, buy the shoes, and then post them. That's the art of experimentation: finding cheap ways to run experiments without actually building the thing. I agree that it's difficult, but you know, we're all software developers; we didn't get into software development because it was easy, right? So when people say, "Oh, that's difficult," I say, yes, of course it's difficult; if you wanted an easy job, you came into the wrong field. What I do agree is that it's not a very well-established field, so what I urge you to do is come up with ingenious ways to experiment, test your assumptions, and then blog about them and share them, because as a community we all need to get much better at sharing our ideas for doing these
things and building up a body of knowledge around them. One more question? Okay, over here. "Hi, I would like to ask how you start with this, because I think most of the people here are not at the top of the food chain in their company. So do you recommend buying the books and giving them as a Christmas present to your bosses, or starting in your own position, experimenting, and showing good results?" Well, I'm certainly not going to complain if you buy my book for your boss, but I think you need to do both, right? There's a part of this which is guerrilla experimentation: try to find smart ways to do this in your own backyard, experiment with it, gain confidence that it can actually work, and find other people in your company who want to try this stuff. You know, the best hack of all is finding someone you don't normally talk to in your company and taking them out for lunch. Find your database administrator, take them out for lunch, find out why they hate you, find out one thing you can do to make their life a little bit better, and do that. And do that once a week: just randomly pick someone in your company you don't normally get to hang out with. Try to pick the person you're cursing under your breath when something doesn't work: "It's that UX person again, they've sent me this terrible screenshot, what idiots." Go and take them out for lunch; go and find out what's bothering them. There's a great quote from Jesse Robbins: "Don't fight stupid, make more awesome." I think that's very important. Try to find ways to make things more awesome, and find the people who you think are being stupid and find out why; it's normally because they're really frustrated about something, or they just don't know what's going on, because we don't have feedback loops. Try to find ways to help make that person more awesome. If we all did that every day, that would be really cool. What I
will say is that it's hard to create lasting change without executive support, and the reason that executives change their minds is usually because there's a disaster. There's a book by John Kotter called Leading Change, and he says the most important thing for an organizational change to be successful is a sense of urgency. So find the people who are feeling the pressure; those are normally the people who are willing to try different things. If you can get those people on board at the senior level, with people in the middle feeling the urgency and people on the ground feeling the urgency, that's normally a good place to start: people with a sense of urgency and the capability to do something about it are your sweet spot for this kind of thing. So again, not an easy problem, but those are my two ideas for helping you get started with this stuff. Thank you very much.
Info
Channel: GOTO Conferences
Views: 235,844
Keywords: GOTO, GOTOcon, GOTO Conference, GOTO (Software Conference), GOTO Berlin, GOTOber, Jez Humble, Agile, Agile Methodology, Scrum, Scaling Agile, Software Development, Computer Science, Videos for Developers, Boundries of Agile, Programming, Software Engineering, Chef, Agile Manifesto
Id: 2zYxWEZ0gYg
Length: 51min 2sec (3062 seconds)
Published: Wed May 04 2016