The Future of Software Engineering • Mary Poppendieck • GOTO 2016

Video Statistics and Information


FYI, here's the talk Abstract

2020 used to be far in the future. Today it’s four years away. We no longer need to guess what breakthroughs await us in that magic year, the future is hiding in plain sight: a reliable Cloud, industry-disrupting Platforms, massive data from the Internet of Things, really useful Artificial Intelligence, surprising Virtual Reality...

The question is not what the technologies of 2020 will be – that is rapidly coming into focus. The real question is: What is value? What’s important, what isn’t, and why? Should you focus on Continuous Delivery? DevOps? How do you get from where you are now to where you need to be? How do you scale? How do you keep your systems reliable and secure?

This talk discusses how software engineering is being changed by emerging digital technologies.

👍 6 · 👤 u/goto-con · 📅 Mar 26 2019 · replies

She says federated systems are the future; Moxie from Signal says the opposite. We'll find out soon enough who's right :)

👍 2 · 👤 u/[deleted] · 📅 Mar 26 2019 · replies
Captions
GOTO: a conference for developers, by developers. gotocon.com

So I think you've heard that the future is already here, it's just not evenly distributed. So if you want to know about the future - let's say 2020, and actually that's not that far in the future anymore, maybe 3 years, or 2025 - all you have to do is look, sort of, in the corners of what's going on today. And in fact, at this conference you're going to be doing quite a bit of that, and you'll get a pretty good idea about where the future is going to go.

Let's, however, go back a ways, maybe 20 years or so. The first thing I want to talk about is scale - getting big. Because things that felt big 20 years ago are really small today, and things that are big today have scaled in a particularly interesting way: they've scaled broad, rather than tall. So, if you were a penguin on South Georgia island... All these pictures, by the way, were taken by my husband Tom, who's a great photographer. These are king penguins, and when they don't have enough space, they go out to sea. They don't climb up more mountains.

So, let's talk about what happened 20 years ago, in 1996. What was the state of things then? Most software development at that time was business software or control software. When I did software development, I was a process control engineer; I controlled great big pieces of equipment with my code, which - I'm biased, but I think that's the best software in the world. So I wasn't involved in this stuff they call IT. But most software, at least if you read the literature, was about big transaction processing, mostly in enterprises. And this was just as the internet started to gather some software. I remember in 1999, when people talked about how big software was and how long it took to change, a release a year was considered fast. I can remember reading in Computerworld that there were some teams out there that had learned how to deliver software in 6 months. Can you imagine that? So it was really slow to change. And I thought, you know, there's an awful lot of new software that's come along - this is 1999 - and I was talking to some colleagues and said: it's called the internet; that's a lot of software that's been developed in the last few years. And they looked at me and said: the internet? That's not software! Yes it is. But it's a different kind of software.

So, back in 1996, we thought about transaction processing as: here's a database, an ERP database. When I was doing process control systems, I followed one of my systems to a manufacturing plant, and I headed up the information technology office in that plant. We did something called MRP - material requirements planning - which meant we planned all of the different inventory movements that happened in the plant, and all of the production workstations. That eventually expanded to the whole enterprise, so it got to be called ERP, enterprise resource planning. And enterprise resource planning was centered on a database. The corporate database. The one, single thing which, sort of, integrated all of the applications in the enterprise - and still does, actually. And it sat on a single server. Well, why is that? Well, there's this theorem called the CAP theorem, you may have heard of it.
It basically says that if you've partitioned your database onto different servers, you can have one of two things, but not both: you can have the data available instantly when you want it, or you can have consistency across a transaction - the whole transaction or none of it. But you can't have both. And because the database was the thing that provided us with both the availability and the consistency of transactions, we had to have it on a single server. So if you look at the late 90s, all of the enterprise databases sat on one big computer, and the corporation was integrated through that computer. That was, like, the only way to go.

But then along came Google, and suddenly there was a whole bunch of data that didn't fit on one server anymore. In fact, if you have 1.3 billion pages - this is about 2001 - it's just not going to work. You have to go across a bunch more servers, and you have to figure out how to do that in a way that allows you to respond very rapidly. So there are two ways: you can go big, or you can go wide. Tall or wide. You can get a much bigger computer - and interestingly enough, in about 2000, I think it was, Amazon was sitting there with a big, big front end and one big back end, that single database, and every holiday season they absolutely couldn't handle the transaction volume anymore. So, you know what they did? They got a bigger computer. And you know what happened? If you know anything about queuing theory: if you go to bigger batches, you get slower. And they got slower. So they said: well, that's not going to work.

So this is Grace Hopper - you've heard of Grace Hopper? She's the one behind COBOL and all of that. And she said: if one ox couldn't do a job, they didn't try to grow a bigger ox, they used two oxen. When we need greater computer power, the answer is not to get a bigger computer, it's to build systems of computers and operate them in parallel. Managed by software. That's the whole concept of scaling out rather than scaling up. You get more computers, and you figure out how to deal with the fact that they have problems.

So, for example: here is one of the first hardware racks from Google, from 1999 - it's in the Computer History Museum now. When I first heard about how they were storing all of those billions of pages, I was rather surprised: they used really cheap stuff, they expected it to fail, and they kept multiple copies of everything so they could just toss out whatever failed. What a good idea. And they managed it with software. The concept was actually kind of radical at first, but really the only way to go.

So the whole idea, if you're going to search the whole internet and provide responses instantly, is that not only do you take your hardware and spread it out, you have to do the same with your file system. Here is the early paper, from 2003, on the Google File System. Same concept: you take small chunks of your file, you spread them all over the place, you make sure you have a good 3 copies of everything, and when one of them dies, that's okay, there are two more - you just throw out the copy that died and regenerate it. It's that whole concept: your underlying system is fragile - no problem. You manage it with software, and you have multiple instances so you can deal with it.
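To make that "chunks, three replicas, regenerate on failure" idea concrete, here is a minimal in-memory sketch in Python. It is an illustration of the principle, not the Google File System's actual design; the class name, chunk size, and server names are invented for the example.

```python
import random
from collections import defaultdict

CHUNK_SIZE = 4   # bytes per chunk in this toy; real systems use tens of megabytes
REPLICAS = 3     # copies kept of every chunk

class TinyChunkStore:
    """Toy illustration of chunked, replicated storage (not Google's code)."""

    def __init__(self, servers):
        self.servers = set(servers)
        self.placement = defaultdict(set)   # chunk_id -> servers holding a copy
        self.chunks = {}                    # chunk_id -> bytes

    def put(self, name, data):
        # split the file into small chunks and spread each one across REPLICAS servers
        for i in range(0, len(data), CHUNK_SIZE):
            chunk_id = f"{name}:{i // CHUNK_SIZE}"
            self.chunks[chunk_id] = data[i:i + CHUNK_SIZE]
            self.placement[chunk_id] = set(random.sample(sorted(self.servers), REPLICAS))

    def fail(self, server):
        """A server dies: drop it and re-replicate any under-replicated chunks."""
        self.servers.discard(server)
        for chunk_id, holders in self.placement.items():
            holders.discard(server)
            while len(holders) < REPLICAS and len(self.servers) > len(holders):
                holders.add(random.choice(sorted(self.servers - holders)))

store = TinyChunkStore(["s1", "s2", "s3", "s4", "s5"])
store.put("index.html", b"the quick brown fox")
store.fail("s3")   # no data lost: surviving replicas are copied to other servers
```

The point of the sketch is the attitude she describes: failure of any single machine is expected and handled by software, not prevented by better hardware.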
And then, of course, you've got your data that you have to search instantly, scattered all across these files. A year later, in 2004, they came out with the MapReduce paper, which became the basis for Hadoop. It took from 2004 to maybe 2010-12 before Hadoop emerged as this wonderful thing that everybody could use; it took a long time to get it from handling just a few files on a couple of computers to something that could deal with massive amounts of data. It was done as an Apache open-source project, and Doug Cutting, who worked on it, was hired by Yahoo, because they decided maybe they had better go to this MapReduce thing too. Between the various big companies in Silicon Valley at the time, the whole underlying structure of Hadoop was developed over about a 5-8 year period, and there was an awful lot of sharing, because Doug decided that it was going to be open source - he would be happy to work for Yahoo, but it had to be open source. And that's the thing we do in the software engineering world: we share our knowledge so that it can grow a whole lot faster. I don't think, if these companies had kept all that knowledge of how these file structures and databases work proprietary, we'd be anywhere near where we are now.

So that's one concept: you scale out with the file system. When you look at your architecture, it's the same concept. You might think you've got a big monolithic database, but if you have a big monolithic database, you're not big. Because if it's all in one spot, it's actually pretty small. In order to really get big, you have to scale out.

So let's go back to Amazon in about 2001, sitting there with a gazillion transactions to handle all at the same time. The first option was to scale up. Take a single transaction - let's say it has four parts: browsing, shopping cart, payment, and fulfillment. You can scale the whole transaction, maybe even across regions, across different computers, if you want to. But if you want to scale out, you scale differently. You take your transaction and you say: you know what, I'm not going to let the database handle it anymore, I'm going to handle that transaction myself. So I have my four different services - browsing, purchasing, fulfillment, payment - and if a few of them are bottlenecked, I'll just scale out that small area. That turned out to be pretty much the way people have figured out how to scale ever since; Amazon needed microservices well before a lot of other companies. And they did it because the idea was: if we're going to handle this many transactions, we're going to have to think differently about how we scale.

Amazon had a meeting of the executives after one of these almost-disaster holiday seasons, and they asked: how are we going to deal with this next year? And they came to a conclusion - the conclusion was: we have to communicate better. Have you ever heard that? Sound familiar? You know, I think it's kind of a cop-out - if cop-out is even a good word in English; I don't know what you say in German for cop-out... what's a good translation for that? [...]
[something in German] Did you guys hear him? So it's an interesting excuse, but it isn't a solution. In fact, I hear the same about training: oh, if we just had more training. Well, you know, Jeff Bezos says: what we need is less communication, not more. We communicate too much; it doesn't scale, not at the scale that I intend to scale this company. So the idea is NOT to communicate more, the idea is to communicate less. And you do that through an architecture which doesn't force people to have to communicate.

So he decided - they decided - that they were going to break the company into much smaller teams and put those teams on chunks of software that didn't have dependencies, so they wouldn't have to talk to each other. They were relatively autonomous teams, as autonomous as they could make them, with as few dependencies as possible, and they were what Jeff Bezos calls two-pizza teams: they could be served lunch with two big Seattle-size pizzas - so maybe 8 people or something like that. And they own the whole service, cradle to grave: idea, design, testing, deployment, everything - monitoring in production. So you have these two-pizza teams that deal with all of those transactions coming through. They might own the shopping cart service, or they might own the recommendation service, or something like that.

So that concept of scaling out, rather than scaling up, only works if you are using software to manage the things you used to depend on the hardware or the database for. So let's go on to infrastructure as code and continue our story. I've just talked about scale-out, and I want to talk about infrastructure as code - another hugely important thing happening in software engineering.

Have you heard of Conway's Law? Most of you have; for those who haven't, it basically says your organisational structure and your system architecture are going to match. No matter how hard you try not to, your architecture is going to drive your organisational structure, or your organisational structure will drive your architecture. Now, Amazon, having broken up into these two-pizza teams, had an organisational structure that drove their architecture. The two-pizza teams were supposed to be autonomous, and autonomous means independent deployment - that's how I'm going to define it. It's not "I get to make my own decisions", it's "I get to deploy without having to talk to all the other teams". That's what it meant there, which basically created havoc in operations. If you're in ops, and all of a sudden you have 50 million teams trying to deploy independently of each other and you haven't actually handled all the dependencies, this could be a real nightmare - and it was.

So Chris Pinkham, who was the vice president of infrastructure, thought about Amazon and thought about the problem, and said: you know what we ought to do? We have to have self-service for development teams. Instead of them dumping all their problems on us in ops, we let them handle their own problems; we just give them the code to deploy themselves. And he said: you know, maybe we could even sell this capacity. And if you know anything about Jeff Bezos, who was the CEO, you know that if you have something he might be able to sell, he was truly interested.
It took a little while for that interest to percolate up and then down again, and meanwhile Chris Pinkham went back to South Africa, where he grew up. When they finally got to his idea and said let's do it, they chased him down in South Africa and said: why don't you put together a team there, where you want to be, and see if you can make this idea work. So he did; he assembled and led a team, and in two years they came out with EC2 - Elastic Compute Cloud - which was launched in 2006. And, as they say, the rest is history. Here we are, 10 years later, and Amazon Web Services is making about 10 billion US dollars a year, or something like that, and growing. I used to say a billion dollars a quarter, but it's even more now. And it all came from this concept of independent teams that could do something that really could be separate.

So if we think about that, I want to propose that there's a cloud in your future, and I don't care if there are regulations to keep it from being there - if you're in a bank, or a telecom, or the medical area, where right now there isn't a cloud - there will be. And the reason is that, from an economic point of view, the cloud is cheaper. Generally speaking, most clouds are cheaper, more stable, more secure and more expandable, by far, than most on-premise data centers. So if your IT center is about cost, and it becomes possible to overcome those issues of regulation, then there will be a cloud there, because it will just plain be cheaper, simpler, more secure and so on. Even if it isn't today, it almost certainly will be.

And then there's the technology. Let's say you're in a bank. Banks currently are all on-premise, or sometimes hybrid, and there's stuff that really, really has to stay on-premise. But right now GE has a cloud called Predix, which is an industrial cloud. It can, for example, manage electrical grids; it was the underlying technology that managed the electrical grid in London during the Olympics, to make sure that none of the venues lost power during any of the events. Now, if you can put IoT devices on electrical grids or aircraft engines - which GE does too - and send that information to the cloud someplace, you have to be able to do it securely. That's what they do: they do this stuff with a high degree of security, and they match whatever regulations the governments they work with require. So their cloud is probably quite secure, and matches regulations in industrial areas. And they have just put a medical device cloud together which passes, at least, all the US HIPAA regulations - which people said couldn't be done, but they've figured out how to do it. So if you're in a bank, I'm going to bet that in a few years some bank out there will come up with a banking cloud which matches all of the regulation and security requirements that you currently think can't be matched. And then you go back to: that cloud will probably be cheaper, more stable, more secure and more expandable than what you happen to be working with right now. So the economics of it will be incredible. I bet there's a cloud in your future. And not necessarily a hybrid cloud - people are doing hybrid clouds, but the use case that really makes a hybrid cloud the right thing to do has yet to be discovered.
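The self-service idea she traces from Chris Pinkham to EC2 is, in practice, "a team provisions its own capacity with code". As a minimal sketch of that, here is a Python example using boto3 (the AWS SDK for Python); the AMI id, region, instance type, and tag values are placeholders, and it assumes AWS credentials are already configured. It illustrates the idea of infrastructure as code, not anything Amazon described in the talk.

```python
# Minimal infrastructure-as-code sketch: a team provisions its own server with code.
# Placeholder values throughout; assumes boto3 is installed and credentials are set up.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI id
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        # tag the machine with the two-pizza team that owns it, cradle to grave
        "Tags": [{"Key": "team", "Value": "shopping-cart"}],
    }],
)
print("launched", response["Instances"][0]["InstanceId"])
```

The design point is that no ops ticket is involved: the team that owns the service also owns, in code, the infrastructure it runs on.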
Now, if you're going to a cloud, you have to understand that applications designed with traditional data architectures are not going to work very well there. Because the cloud is distributed; it's like building a house on a sandpit, and you have to figure out how to build a house on sand when you're on the internet, or in the cloud.

So the other area I want to talk about is infrastructure as code, because that's part of this: how do I deal with the whole cloud environment? Because that's what it is - it's infrastructure as code. And you can do it internally, but eventually you're probably heading towards some kind of cloud. Containers - there are going to be a lot of talks about containers here, at least one or two, so I won't talk about those today - but containers are this thing that creates a process which allows your work to be easily put into a secure, standardised mechanism, so that when it's deployed you don't have to worry about any of the downstream stuff; [inaudible name] likes to call containers a process that allows you to have a standardised environment.

And then there are serverless architectures - I think there was a talk about this yesterday. Serverless is my kind of thing, because remember what I did when I was doing software development: I was programming pieces of equipment, and our equipment sent interrupts, and we waited and listened for those and responded to them. It was an event-driven environment, which to me is a really comfortable place to be, but to a lot of programmers who are used to procedural programming it's kind of: huh, what's this all about? But the idea of an event-driven environment is really important when you get into the internet of things, and most people involved in serverless - any kind of varying response to stuff out there in the world - are finding there's a massive economic advantage in not having servers sitting there waiting for events to happen. It just costs a lot less.

And then software-defined networks: forget the network hardware - there's a really interesting paper by Google, just recently out, on how they structure all of their networks with software. No more special hardware, because it just can't handle the kind of volume they're heading for.

So when you think about where this whole cloud area is going, there's a whole new technology stack you have to learn how to deal with. Back in the 20-years-ago timeframe, we had a thick client-server app, we had some runtimes and some middleware and some operating systems, and then we had a monolithic database sitting underneath it all. Today, it's much more like a thin app on a mobile or tablet or something like that; developers assemble an assortment of the best available services, which change constantly, running on whatever available hardware happens to be around. So if you think about that, the real problem we have in architecture is a dependency problem. The fundamental problem we're seeing in software engineering, from our history, is that we've got this thing called an enterprise architecture - have you seen this picture, anybody? Some people have, right? I was giving a talk to some enterprise engineers, so I went and looked up what it was all about, and that's what it's all about.
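Stepping back to the serverless point above for a moment: the programming model is "write a function that reacts to an event, and pay nothing while no events arrive". Here is a minimal sketch of such a handler. The function signature follows the common AWS Lambda convention of (event, context), but the event fields ("device_id", "temperature") and the cooling rule are invented for illustration, e.g. a thermostat reading arriving from an IoT device.

```python
# A sketch of an event-driven, serverless-style function. No server waits for this
# event; the platform invokes the function only when a reading actually arrives.
import json

def handler(event, context):
    # accept either a raw dict or an API-gateway-style envelope with a JSON body
    reading = json.loads(event["body"]) if "body" in event else event
    if reading.get("temperature", 0) > 30:
        return {"statusCode": 200,
                "body": json.dumps({"action": "cooling_on",
                                    "device": reading.get("device_id")})}
    return {"statusCode": 200, "body": json.dumps({"action": "none"})}

# Local usage example (no cloud needed to see the behaviour):
print(handler({"device_id": "thermostat-42", "temperature": 34}, context=None))
```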
Back to that enterprise architecture picture: the problem is that there's this one database that's the magic integrator of everything else - but if you think about it, it's also the magic dependency generator for everything else. That's what it is: a massive, big dependency generator. It's a scale-up approach, and if you really want to get big, you can't scale up; you have to scale out to get seriously big. So you need a federated architecture of some sort. Not necessarily microservices, but in any environment where you want to be able to get really big, you need to scale out, so you need some sort of federated architecture, where the data stores live inside the federated architecture, not in some central thing that links everything together. This fundamental concept of a federated architecture is where we have to go in order to get rid of the dependencies that are killing us, delaying our deployments, and all that sort of stuff. And that's the kind of architecture you need to have if you want to move to the cloud. And although I bet a bunch of you are sitting on the left side of that picture today, I don't think you can stay there. Your future has to be over here, with some sort of federated architecture.

So you need to learn how to do software engineering not with databases, but with APIs. Which means APIs are every bit as important, as valuable, and subject to all the kinds of constraints that databases used to be subject to. All sorts of things: you have to have good, hardened interfaces, you have to have good directories, you have to have standards by which the interfaces change slowly, you have to have versioning of changes. It's almost like a different way to look at the database, but a way that can be localised. If you think of APIs as architecture, what they do is enable service architectures; they lower the integration friction, because the integration is only between me and the other service over there, not between me and all 50 of those other services. And they have local persistence. Our SQL databases are rapidly being replaced by the kind of storage that spreads across all of those different files you saw in the Google file structure, instead of just one. And we don't worry anymore about "we don't have space to put it in our database" - we have space. We're going to worry about our database later; we know what information we need, and we're going to put it where it belongs. As you heard just before this talk, we're going to gather the data, put it out there in some format, and figure out later how to deal with it and what to do with it. That's a whole different concept from the way we thought about databases just a few years ago.

You can also think about APIs as a product. An API as a product means the API needs to be owned by a team who understand who the consumers of that API are, who focus on it, and who evolve its capabilities. So if you take a look, for example, at anything in the internet of things - this would be a Nest thermostat - it's got an API, and I can use that API to put a mobile app together on my Android phone, or my iPhone or whatever, and go and deal with my thermostat. The APIs are what allow those little things on the internet to be talked to: each one owns an API against which you can build whatever you want. What makes them into a product is their API connection to the rest of the world.
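As a small sketch of the "API as a product" idea, here is a toy versioned HTTP service in Python using Flask. The routes, fields, and in-memory store are invented for illustration and are not the real Nest API; the point is a stable, versioned interface owned by one team, with its own local persistence behind it, where v1 keeps working while v2 evolves.

```python
# A sketch of an API treated as a product: hardened, versioned interface,
# local persistence owned by this one service. Requires Flask (pip install flask).
from flask import Flask, jsonify

app = Flask(__name__)

# local persistence owned by this service alone (a real service would use its own store)
THERMOSTATS = {"living-room": {"target_c": 21.0, "current_c": 19.5}}

@app.route("/v1/thermostats/<device_id>", methods=["GET"])
def get_thermostat_v1(device_id):
    device = THERMOSTATS.get(device_id)
    if device is None:
        return jsonify({"error": "unknown device"}), 404
    return jsonify(device)

# A later, versioned evolution of the interface: v1 consumers are not broken.
@app.route("/v2/thermostats/<device_id>", methods=["GET"])
def get_thermostat_v2(device_id):
    device = THERMOSTATS.get(device_id)
    if device is None:
        return jsonify({"error": "unknown device"}), 404
    return jsonify({"device_id": device_id, **device, "units": "celsius"})

if __name__ == "__main__":
    app.run(port=5000)
```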
I find it interesting that Carnegie Mellon, which has always been really big on "what do we have to think about in software engineering", came out with an article in 2014 about software engineering for big data systems. What they said is: big data applications are pushing the limits of software engineering, and it's essential that the body of software architecture knowledge evolves to capture this advanced design knowledge for big data systems. That's where software engineering needs to be spending time. And if you're thinking about research, thinking about what you do in a university - are you thinking about the engineering necessary for really good big data systems? They said that big data systems are inherently distributed - there's no option - so their architectures must explicitly handle partial failures, concurrency, consistency and replication, and any communication latencies that are sure to occur. That has to be handled in the whole concept of managing big data: they have to have architectures that replicate data to ensure availability in case of failure, and they have to be able to design components that are stateless - because components and instances get killed all the time - that are replicated, and that are tolerant of failures of dependent services. So we need to figure out how to move architectural engineering this way - not just for big data, but certainly for big data.

The last thing I want to talk about in this whole area of the cloud is resilience engineering, because now we have huge amounts of things depending on our software. For example, there were a couple of airlines that went down in the US, each for about a whole day, in July, within two weeks of each other. One was Delta and one was Southwest. They each had a little glitch: Southwest lost a router, and it took them out for about a day; Delta lost an electrical switch in Atlanta, its home base. It turns out that in order to fly an airplane, you have to file a flight plan with the FAA - the Federal Aviation Administration - so for any airplane to fly anywhere, this has to be filed, and the only computer that could do it happened to be hooked up to that electrical switch that failed. So the entire system was knocked down because one electrical switch in Atlanta didn't allow the flight plans to be filed. So how are we going to know when these failures are going to happen, if we just try to make everything 100% failure free? We have to start thinking differently. We have to start thinking about antifragile instead.

If you want to see more about this, you can read Nassim Nicholas Taleb's book. The whole concept is that there are things that are fragile, like glass; there are things that are robust, like swords; and then there are things that get better when they're attacked. For example, a month ago I had a flu shot - every fall I have a flu shot. I put these bad things in my body so that it gets stronger, and I don't get sick much anymore, ever since I started doing that. The whole idea of adding bad stuff to make things better is what antifragile is all about. So look at the classic example of Netflix, with their Simian Army: they do this stuff all the time. From the small - every two hours I'm going to kill an instance - to adding latencies, all the way up to their Chaos Gorilla, which takes down a whole Amazon region every so often, just to see how everything responds.
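A toy, in-process sketch of that chaos-monkey idea: kill a random instance of a replicated, stateless service now and then, and check that the system as a whole keeps answering. This is not Netflix's Simian Army, just an illustration under invented names of "inject small failures on purpose and make sure the clients tolerate them".

```python
import random

class Instance:
    """One replica of a stateless service."""
    def __init__(self, name):
        self.name, self.alive = name, True
    def handle(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

instances = [Instance(f"web-{i}") for i in range(3)]

def call_service(request):
    """Client-side resilience: try replicas in random order until one answers."""
    for instance in random.sample(instances, len(instances)):
        try:
            return instance.handle(request)
        except ConnectionError:
            continue
    raise RuntimeError("total outage")

def chaos_monkey():
    # inject a small, contained failure on purpose
    random.choice(instances).alive = False

for request_id in range(5):
    if request_id == 2:
        chaos_monkey()
    print(call_service(f"request-{request_id}"))
```

The "test" here is exactly the one she describes: the service passes not when it is defect free, but when requests keep succeeding while instances die underneath it.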
So when you think about testing, you also have to think about that. Netflix thinks the real test is whether or not your system, your software, can survive in production with all these chaos monkeys and things like that killing instances, and still keep working. The ultimate test of whether a team is good is whether their services keep working when any of these gorillas or monkeys come in and kill things.

So, to sum up software engineering in the cloud and where I see it going: we have to move to federated architectures - there's just not going to be an option. It's going to be cheaper, better, way faster to deploy, and it's going to be necessary, because you're going to go to the cloud. We have to stop thinking about the database as the integrator - and I'm still reading stuff from McKinsey and Gartner about how the database and the ERP system are the integrator. It has to stop being the integrator, okay? That's, like, so last decade. We need to think about how we connect with APIs across multiple different ways of storing data. We have all these stateful protocols, but when we start getting into anything that's going to be out there, like the internet of things, we have to think stateless, because in the cloud stuff dies all the time, and it will lose state. So what are you going to do about it, if you're depending on it? We have to stop thinking about consecutive execution and think about stuff happening concurrently, where we don't actually know which race is going to win. I can remember that race conditions were one of the biggest things I had to deal with when I was programming equipment. You always thought about which one was going to get there first - you don't know. So, instead of synchronous communication, like talking on a telephone, we have to figure out how to do asynchronous communication, like text messaging: the other person doesn't have to be there until they get around to answering your text. We have to figure out how to do event-driven programming - as I said, to me that's natural, because it was the first way, well maybe the second or third way, I thought about programming. When you have big pieces of equipment it's very obvious. But I've been told it's really hard for people to think about event-driven programming. Well, trust me, it's not so hard, you just have to switch your mind a little bit, away from procedural programming. And we have to forget about defect free. Forget about it. Because that means we're fragile. We have to think about: when stuff happens, how resilient are we, how fast can we recover, how small can we limit the blast radius when stuff goes down. Because it's going to go down. So this is an artifact of the enterprise legacy world, and this is more like what you've got to do to be involved in the internet of things. When we get this stuff working right, we're going to be much further into the future than we were.

Now, it's not the end of my hour, so I'm going to continue with something else - but it's about the same thing. So this is Germany, and I've been told - actually, I've observed - that people like process in Germany. So this whole "how do we deal with all of this uncertainty?" becomes "what process can we apply in order to get our hands around this?"
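To make the telephone-versus-text-message contrast concrete, here is a minimal asynchronous sketch using Python's asyncio. The service names and delays are invented; the point is that the caller fires all the requests without blocking on each one, the way you would send texts rather than hold three phone calls in sequence.

```python
# Asynchronous communication sketch: fire requests, gather replies as they arrive.
import asyncio
import random

async def ask_service(name):
    # simulated network latency we don't control
    await asyncio.sleep(random.uniform(0.1, 0.5))
    return f"{name}: done"

async def main():
    replies = await asyncio.gather(
        ask_service("inventory"),
        ask_service("pricing"),
        ask_service("recommendations"),
    )
    for reply in replies:
        print(reply)

asyncio.run(main())
```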
So I would like to talk about DevOps. This is a book that's not a month old yet - I'm actually not sure you can get your hands on it yet. I have got my hands on it, I've read through it, and I like it a lot. It's by Gene Kim, Jez Humble, Patrick Debois and John Willis, and it's about the three ways. Gene Kim wrote The Phoenix Project, about DevOps, and this is sort of the handbook behind it, with lots of how-tos. In it they emphasise the three ways: flow, feedback, and experimentation and learning. So the other part of this talk is about flow, feedback, and experimentation and learning - because if you're going to make this stuff work, you have to figure out how to make that process work.

There's one thing that we know for sure. Absolutely for sure. When we have a complex system - it doesn't matter if it's a weather system, or a heating system, or an [inaudible] system - here's something that does not work: you take the complex system and you smash it. You have no idea how it's going to respond - guaranteed. All bets are off. You don't know the interconnections, you don't know the unintended consequences, you don't know exactly what's going to happen, and try as you might to test it, you can't figure it out. So, what does work? What works for complex systems is this, and there's an awful lot of experience showing that this is the way to get secure, stable, complex systems that don't crash: we poke them, instead of smashing them, and we respond to whatever happens with that small, contained poke.

Because of that, the landmark thing that happened in 2010 is Dave Farley and Jez Humble's book on continuous delivery. If you haven't read it, you should - and maybe you should be going to Dave Farley's talk if you haven't heard it. This book has changed the way we think about the software development process; it's had an amazing impact, and it's pretty much the process you need to be thinking about if you want good, stable systems.

So what is it all about? It's about making sure that your process is driven from acceptance tests that are executable, and that you write code to pass those tests. It's about making sure that you have cross-functional, or full-stack, teams that include the full stack of people necessary to do the particular thing you're trying to do: product has to be there, QA has to be there, ops has to be there - yeah, a few devs too. It's about automating everything: build, testing, database migration, deployment, you name it. It's about deploying to the trunk; it's about having no branching. How many here have branches? Okay, then you're not actually there yet - it's about not having branches. Wow. Now, if you have 30 and you get to 3, you've made a big step, but in the end you want to have one single trunk that you're continuously integrating to, and that software is always production ready - or you get it fixed right away, top priority. Sounds terrifying? That's where you need to go. And you deploy all the time - constantly, with the cadence differing by domain - but deployment and release are not the same thing. You release by turning something on with a switch; you deploy all the time. The code does not have to be live until you turn it on, but you deploy it constantly. So if you think about that as a deployment pipeline, what you have is something that's going to be faster, safer, and better than what you had before.
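A minimal sketch of that deploy-versus-release distinction, in the common form of a feature flag. The flag store here is a plain dict and the checkout functions are invented for illustration; real systems usually read flags from a configuration service.

```python
# Deployment vs release: the new code path is deployed all the time,
# but only turned on with a switch.
FEATURE_FLAGS = {"new_checkout": False}   # deployed to production, not yet released

def old_checkout(cart):
    return f"old checkout: {len(cart)} items"

def new_checkout(cart):
    return f"new checkout: {len(cart)} items, one-click payment"

def checkout(cart):
    # both code paths live on the single trunk; the flag decides which one runs
    if FEATURE_FLAGS["new_checkout"]:
        return new_checkout(cart)
    return old_checkout(cart)

print(checkout(["book", "coffee"]))       # old path: new feature deployed but off
FEATURE_FLAGS["new_checkout"] = True      # "release" is just flipping the switch
print(checkout(["book", "coffee"]))       # new path, no redeploy needed
```

This is also what makes trunk-based development workable: unfinished work can merge to the trunk behind a flag instead of living on a long branch.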
And this needs to be the process you think very hard about heading towards. The second piece - remember the three ways: flow, feedback - the next one is feedback. I propose you need end-to-end feedback. You need, yes, to see what's going on in production with monitoring systems, but you also need to see what's going on with your consumers once the code is deployed, and get that fed straight back to the people who are making decisions about what to do next - who had better be on the team, because this is a fast process. So we don't have outsourced design anymore. There's research showing that the product manager is wrong at least half the time - I put the reference down here; you can get these slides and look it up. And actually, if I talk to product managers, they say: yeah, this is true. So how do we not do everything that we're asked to do? Because clearly half the time it's just a guess - well, it's always just a guess - and you could flip a coin to decide which half to take, just as easily. Or if you take a look at the spec that somebody gives you, like a government body or something like that, probably 2/3 of the features and functions in the specification are not necessary. They don't need to be there. So it's just better to be lazy, and only do the 1/3 or less of the work that's actually going to provide value, than to do everything in the spec. We should not be outsourcing design to somebody else who tells our teams what to do.

We need to move from delivery teams to problem-solving teams. Delivery teams get instructions, take orders - you give them a menu and they give you the order. We have to move to problem-solving teams: teams that have everybody necessary to take a problem - not a solution, but "here is what the consumers are trying to accomplish, here is the job they need to get done, here's the metric we need to move in a different direction - see what you can do with it". We need problem-solving teams that address that kind of question, rather than teams that are told what to do.

So, if you look at the experimentation and learning process - I promised you a process, here it is. You start with signals, not requirements. You have a problem statement, not features. You want to focus on what problems you're trying to solve, and a problem statement isn't easy. You plan with hypotheses, not estimates. It's not clear to me anymore, if you have problem-solving teams, what good estimates do for anybody. You do have to know that you can solve the problem within the constraints, and, given that, the rest should be up to the team: here are my constraints, here is my problem; if you can handle it within those constraints, usually of time and money, then go ahead and do it your way. And so we don't want a backlog of stories, we want multiple experiments that we're going to run against the hypotheses; and we don't want guesses, we want analysis and conclusions. This is the standard scientific method, the engineering process, and if it does not underlie the process that you use today, then you should be asking yourself: why not? Because in our world, with our uncertainty, this is the only process that works. Scientists have been using it for hundreds of years - it's actually not such a bad process - and it's time we got there.
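As a small sketch of "plan with hypotheses, not estimates", here is what the analyse-and-conclude step of one experiment might look like in code. The numbers are invented and the significance test is a standard two-proportion z-test, chosen here only as one plausible way to analyse such an experiment.

```python
# Hypothesis: the redesigned signup page raises conversion.
# Experiment: show control and variant to similar traffic; then analyse, don't guess.
import math

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p_value = two_proportion_z_test(conversions_a=120, visitors_a=2400,   # control
                                   conversions_b=156, visitors_b=2400)   # variant
print(f"z = {z:.2f}, p = {p_value:.3f}")
print("keep the change" if p_value < 0.05
      else "hypothesis not supported; design another experiment")
```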
But one of the hard parts about this process is the first part: what problem are we going to solve, and what kinds of hypotheses are rational ones to test? So right at the end here, I'm just going to take a couple of minutes to talk about a process I've run into recently. It's certainly not the only one out there, but it's one I find pretty interesting. It's called the Google Design Sprint. It was introduced in 2014 at the Google I/O conference - which is more a design conference than a development conference - and it's a process for prototyping and testing any idea in five days. Which means: before you actually code.

It's described pretty well in this book - let me go through it. It starts on the first day with "everybody get on the same page": how do we do stuff? What's the problem we're trying to solve? Here are the issues, here are the constraints. Then on the second day - there's something I'll talk about in a bit more detail - each individual quietly sketches out their own ideas about how to do it, and then shares them, under the guidance of a designer, because it's a standard design process. They share those ideas on paper, so that we're not doing brainstorming - brainstorming has been proven to make sure that the people who are loudest, or most in command of language, or most like everybody else, are the ones who get their ideas heard. Instead, we want a process that allows different ideas from quieter people, and from people who are not so comfortable talking, to be heard too. Then there's deciding which of these ideas to pursue - and there I'm going to talk about not voting, because voting, again, suppresses minority ideas. And then you do a very quick prototype: you've got some designers there, you put together something really quick - it's not code, it's maybe Keynote or PowerPoint or something like that. And then on the fifth day you have a few users come in and actually test out the idea, with an interviewer walking them through it, the rest of the team watching on a remote screen, seeing how things go, and finding out how that idea plays with at least a few people. So that's the general concept, and here is a book on it, where you can get a lot more information. And I'm just going to show a quick video summary of it.

[VIDEO BEGINS] In business, development time is a precious resource - why are you wasting it? To be more efficient and responsive, Google Ventures created the design sprint, a process for answering critical business questions in just five days. The first day of the sprint is for sharing information between departments and creating a simple user journey. Day two is sketch day, where the team works individually to generate creative solutions to the problem. On Wednesday, the team looks at all the solutions and decides on the best approach. Create a visual storyboard, and you're ready for prototype Thursday: it's time to get productive and build your prototype. On the final day, you test your idea on real people to see if it has any value. If it does - high fives! You're onto something great. If it doesn't, well, you've learned something, and the cost is only five days. So stop wasting your time, and compress the endless debate cycle into a single design sprint week.
[VIDEO ENDS] So, two things about the design sprint I'd like to quickly point out, if you find it fascinating. One is that the idea generation on the second day is really interesting, because individuals don't compete with each other to get their ideas heard right from the beginning. Those fragile ideas get developed by individuals slowly, until they become something detailed that they post, and it's reviewed by the group without anyone even knowing whose ideas are whose. Because there's a thing called conformity bias - one of Linda's biases, yes, Linda? - and it's the idea that people who think they're in the minority will self-silence, or change their opinion. And that makes it even harder for other people who have minority opinions to voice them. So the idea is to find a way for people to get their ideas exposed before they have to worry about whether they're in the majority or the minority, and in a way that doesn't require them to be the loudest or most forward person. It's okay if they're a little bit different from the rest of the group.

But I don't agree with the way they talk about doing the third day, because then people vote - and I'm not actually so fond of voting; you might figure out why. At 3M, where I worked for 20 years, we didn't vote on product ideas, we attracted people to product ideas. We explored a lot of ideas, which you should be doing too. Don't settle on one idea; try a whole lot of them, and make sure they come from the whole design space, including some outliers. Which means that if you vote, the outliers sure aren't going to make it. So you want to pursue a variety of ideas, and if she has an idea and can attract a couple of other people to try her idea out - why not let them? They don't have to be in the majority; they can just be a few people who try their idea. That's how almost all of our really innovative product teams worked: if a champion could attract a few other people, the idea got explored until either it didn't work out, or it did. You gradually narrow the ideas down to the ones that are going to work, and you want to maintain multiple options for as long as possible before you settle on something. Don't just choose one thing and go with it.

So, to sum up the future of software engineering. First of all, there is going to be a big technical advance - it's there, we can see it now: it's about scaling out, it's about infrastructure as code, it's about that whole new technology stack sitting in the cloud, and about having a different way to think about programming, and about security, reliability and resilience. And then there are the three ways of DevOps, sort of expanded to make sure we think about the whole product - not just dev and ops, but the whole thing. So we want to think about how we get a continuous deployment pipeline, how we put together full-stack problem-solving teams, and how we make sure we're doing experimentation and learning all the way through our development cycle. So the future is here, actually. It's just not easy to see - not quite so evenly distributed. But if you look around, you're going to find the future, even in our sessions throughout the rest of the day. So thank you very much.
Info
Channel: GOTO Conferences
Views: 77,596
Rating: 4.854775 out of 5
Keywords: GOTO, GOTOcon, GOTO Conference, GOTO (Software Conference), Videos for Developers, Computer Science, GOTOber, GOTO Berlin, Mary Poppendieck, Software Engineering, Software Industry, Software Development, Engineering, Programming, Coding
Id: 6K4ljFZWgW8
Length: 57min 6sec (3426 seconds)
Published: Sat Nov 26 2016