The State of the Art in Microservices by Adrian Cockcroft

Captions
This is a talk I gave at DockerCon Europe. It was done in the morning, before they announced all the Swarm stuff, so it doesn't reference any of that. It was one of the opening keynotes for the conference, so I tried to do opening-keynote-y things: set the stage, where are we coming from, what does it look like. I'm really talking about three things. I've been talking for a long time about speeding up development, so I'll briefly cover that; it's one of the reasons Docker is interesting — it speeds up development. Then I'll look at microservice architectures, what's going on, a few category-leading examples of different ways of doing this, and then try to figure out what might happen next.

This is the cloud adoption curve; it's also the adoption curve for any enterprise IT technology — mobile, Docker, whatever. You ignore it for a while, then say no, then say "I said no, dammit", then "no, no, no", then "oh crap", and then "OK, I guess we've got to do it". Simon Wardley came up with this. Netflix was way ahead of the curve, doing cloud in 2009. This is my Twitter icon: somebody called me a unicorn, so somebody came up with a "Cloudycorn" t-shirt, a cloud unicorn, and I still have that as my Twitter icon.

What happened about a year ago is that I left Netflix. The rest of the world was doing stuff, and enterprise IT at the beginning of last year was really just trying to figure out cloud. So I went across and joined Battery Ventures, one of the VC firms — somewhere in the top 20, one of the top-tier firms but not right at the top, and it's been around about 30 years. It's headquartered in Boston, we have a Silicon Valley office, and we're about to open a San Francisco office down near the Caltrain station later this month. We have an office in Israel as well, so we take a global approach to things. We've got big funds; we do a lot of mid-to-late-stage investing and a little bit of early stage. It's a fun time, because lots of enterprises are trying to figure this stuff out, and I spend a lot of time talking to big enterprise customers and trying to help them understand what's going on.

Then Docker came along last year, and it wasn't on anyone's roadmap for 2014. Go back a year and look at the predictions for the year: there's no mention of Docker, except maybe from one or two people nobody was paying attention to. Then look at this year's "what's going to happen in 2015": if anyone doesn't mention Docker, they're not paying attention. It's suddenly everywhere. It wasn't on anyone's budget for 2014, and maybe it's on the budgets for 2015, so maybe it actually makes some money this year too. That's an interesting development; it arrived very, very quickly, and it's pretty much a case study in how to build a totally viral, developer-driven product launch. Go study that if you're trying to get a developer product globally adopted: what did they do, how did they do it, why did it work? It's a fascinating thing to study, and I think very few products in recent history, or even over the long run, have taken over the industry that quickly and gotten everywhere that fast.
So from a venture capital point of view, we now have a whole industry being disrupted. Nobody had any of this in their plans, everything's a big mess, and no one knows what's going to happen next month, let alone next year. That's the perfect time to try to figure things out, and if you make the right bets, maybe you make some money. So we're all over this space, trying to work out which parts of the Docker ecosystem to invest in. That's why I'm interested in this.

Let's talk a bit about product development processes. People are trying to do continuous delivery, and you're trying to get around this loop quickly: observe, orient, decide, act. The observe part is really what most companies call innovation. If you hear a big company saying "we don't know how to innovate", it means they can't figure out what to do; the most obvious thing is right in front of them and they cannot decide to do it. So you measure customers, you find there's a land-grab opportunity — Netflix is literally a land grab, they're taking over the world one country at a time, or several countries at a time; last year they did five or six countries in one go. Or you see a competitor make a move you want to respond to, or you just observe some customer pain. The signup flow is probably the best example: how many people visit your signup page versus how many complete signup? You've got a funnel; how do you optimize that funnel and take the pain out of it so more customers sign up? That's the observation part.

The next thing you do is gather data from logs. You look at data that no one has ever looked at before, you do some analysis, you form hypotheses and model them. Nowadays this is called big data, which really means you're trying to answer questions that no one has ever asked before, using data that no one has ever looked at before — which is why the unstructured log-processing stuff matters here. You're not trying to do the kind of query that business intelligence always used to do, like "how much money did we make last week?" That needs to be measured very accurately, you're going to ask it every single week, and if it's wrong, your roll-up at the end of the year is wrong, and that's a problem. This is a different type of query: much more unstructured decision support, and it usually isn't based on the data already in your data warehouse; you have to go rummage around in logs. That's why big data becomes interesting.

The next thing that slows you down is typically your corporate culture: how fast can you decide to do something? If you can plan a response, share what you're doing with everyone else, and just do it, that's great. If you have to ask your VP or CEO for permission to do everything, you're going to be too slow. So culture matters, and one of the big points Netflix made a fuss about is that the culture at Netflix lets people innovate really fast and make things happen quickly.

Finally, what cloud lets you do is deploy stuff quickly. You don't have to file a ticket and wait a month to get a VM; you just deploy stuff, you put it in the cloud, you make a call, and a few minutes later it's done. With Docker you've already got the machine, and it takes a few seconds to do stuff — I'll get to why that's interesting later.
So you're typically doing incremental feature releases: a day's worth of code is what you put into production. Whatever you wrote that day, you should ship to production; you should not build up a month's worth of code in nightly builds before you ship it. There are automated deployments, and things are typically behind A/B tests or feature flags, so the code you're putting into production only gets seen by a customer when it's ready and has been tested — but it's in production as you're building it up, so there's a lot of testing in prod.

That's what the cycle looks like, and you're not just going one way around it; you're measuring customers and going back and forth. But the point of modern development — continuous delivery, very agile environments — is that the speed at which you go around this loop is the speed at which you learn about your customers, the market, and what works and what doesn't. If you can do that ten times faster than your competitors, or in many cases a hundred times faster — quarterly-release waterfall versus daily releases is roughly a hundred times — you're learning a hundred times more quickly, wasting less, going faster, and getting more competitive. That's why this is interesting, and it's driving a lot of big enterprises to start adopting this kind of model: they're doing continuous delivery, they're doing DevOps, and they're transforming their companies. If you want to see more on this, the DevOps Enterprise Summit has a whole bunch of videos from last year; go watch them. The Nordstrom video is probably the best one to start with — Nordstrom is a big old retailer, and they're doing DevOps and continuous delivery in a very agile way. The Department of Homeland Security talk is probably the most fun one to watch: they're running Netflix-style chaos monkeys in production. My jaw dropped — these are the people that supposedly take six months to give you a green card, and they're running chaos monkeys? Seriously, that was jaw-dropping to watch. So the big companies are desperately trying to figure out how to run as fast as the web-scale startups, and it's fun to watch; some do it better than others.

Then you've got this environment in most big companies where you have silos: a product management group, a user interface design group, development, QA, DBAs, operations. The way you get something done is you have meetings between these groups and you file tickets between the groups, and months later that one change you wanted finally hits production. So to speed that up, companies say "we're a big company, but we'll pretend we have a little startup on the side": take a bunch of people, send them off to sit in a different building, and let them do anything they want. With this startup-on-the-side model you get these monolithic product teams, with a product manager, developers, and everything else they need, off on the side.
Quite often this is actually the mobile team, because the company couldn't figure out how to do mobile — it was just bizarre, a weird language, what's this Objective-C thing anyway, and this weird dialect of Java — so they went off and got a mobile product manager and some mobile developers, and there's no ops side to the deployment: you just stick it in the App Store. So you have this team that owns everything about delivering a mobile product, and then you wonder: why are they delivering stuff every few days? What's going on here? Eventually you realize that the entire company should be structured that way, and that the silos are 90 degrees off. You actually want to organize your company with product teams based on microservices — do a 90-degree shift, and organize your management structure around this functional thing broken into a bunch of microservices that each team owns and delivers, with APIs between all the teams. There's a famous memo from Jeff Bezos to the whole of Amazon saying everyone will expose APIs to everyone else at Amazon, and if you don't comply you'll be fired; that was nine years ago or so. Netflix ended up organizing itself like this, and more and more people are doing it.

At the back end, though, you don't want everyone doing their own style of delivery; you want some commonality there — a platform — so you build a platform team. That platform team can be outside your company: it can be AWS as a platform with maybe some layers of software over it, or it can be something you do in-house. I don't really care whether you run Rackspace or AWS or OpenStack or whatever; the point is that the product teams talk to the platform team through an API. You don't file tickets, and you don't have meetings to deploy stuff. You still have meetings to discuss what you want, what went wrong, and how to make it better, but a deployment is an API call — a series of API calls — and that's the key thing: everything is now automatable, and everything can be built with tooling. That's when you really start building a platform. What used to be your ops team now becomes your platform team; they're mostly developers, they automate everything, and they don't do so much firefighting. They're not responsible for what your application is doing; they're just responsible for making those APIs available. So there's a much clearer separation of concerns, but it's a horizontal thing.

One of the side effects of this is that doing DevOps takes about nine months, because about two or three months into the process you have to do the reorg, and it takes probably another six months to recover from the reorg before you've really figured everything out. So think of DevOps as a reorg, not as "I bought Chef" — although Battery Ventures is invested in Chef, and we'd be very happy if you bought Chef at some point. The point is that DevOps is not a tooling problem; it's a cultural and organizational problem. You have to really merge your dev and ops teams into one team, have them work for the same management, have them be embedded and responsible for a piece of the site, but with a common set of platform things they all use. There are common patterns. The way I think about a platform is that it's a set of patterns everyone buys into, which makes stuff easy to do.
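To make "a deployment is an API call" concrete, here is a minimal sketch in Go. The endpoint, URL, and request shape are hypothetical — this is not a real Netflix or AWS API, just an illustration of a product team calling a platform team's deployment API instead of filing a ticket.

```go
// Hypothetical sketch: a deployment is just an HTTP call to the platform API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// DeployRequest is an illustrative payload: which service, which image, how many copies.
type DeployRequest struct {
	Service  string `json:"service"`
	Image    string `json:"image"` // e.g. a Docker image tag or an AMI id
	Replicas int    `json:"replicas"`
}

func main() {
	req := DeployRequest{Service: "signup-api", Image: "signup-api:2015-01-15", Replicas: 3}

	body, err := json.Marshal(req)
	if err != nil {
		log.Fatal(err)
	}

	// The deployment itself is one API call to the platform team's endpoint;
	// no ticket, no meeting. The URL below is made up for the example.
	resp, err := http.Post("https://platform.example.com/v1/deployments",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println("platform responded:", resp.Status)
}
```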
The Netflix-style approach isn't totally locked down — it's not "you must use this pattern". You can do other stuff as well; you're just on your own, it's a little harder to start something new, and you have to build your own tooling around it.

So what does this look like? Netflix was one big monolithic Java app — or you could imagine a big PHP app, there are a bunch of those around — with a release plan: every two weeks all the developers generate a bunch of code, it goes to QA, they integrate it, and then it goes to operations, who figure out how to put it in production. Fairly common, and it's the way you should typically start: with a handful of developers this is the most efficient way to get something built, everyone's using the same tooling, and it all works fine. The problem is that as the number of developers grows — once you've got 10 or 20 or 50 or 100 developers — this becomes a problem. QA finds a bug, they talk to that one developer, and the 99 other developers' work is blocked; it cannot reach production, because this release doesn't roll out until that bug is fixed. And it's often hard to even figure out which developer broke it, because when the monolith is broken it's "call all the developers and figure out amongst yourselves who broke it". We used to have those emails every two weeks at Netflix, on a Friday: who broke the build? Everyone stop your work, go figure it out. It was a huge waste of time. Then you get all the way to ops, they roll out the code, it breaks, so they roll it back; there's another bug, you go back and figure out which developer broke the build in production; and meanwhile all the code developed by all these people is blocked from getting to end users, which is where you learn something. That's the problem, and that was the situation in 2007 when I joined Netflix. Around 2009 we were still running this way, one big monolithic app running the DVD business.

So we tried to figure out how to break this up so developers aren't blocked, and we came up with this model: lots of product managers, each with their own release plan, working with different teams of developers, deploying — potentially in different languages — independently, with APIs across teams, so the development cycles are no longer linked. Then you say, well, I need to get this stuff into production, everyone's doing different stuff, that's a big mess, I've got all these different deployment pipelines, and what I really need is a way to standardize the release. What Netflix did in '09 and '10 was say: anything you can bake into an AMI, a machine image, we can deploy. The platform Netflix built back in '09 was: just give me a machine image — I don't care whether it's got Java or Python or Perl or C++ in it — and we know how to deploy that image to production, autoscale it, and run it. Nowadays we have a better idea, which is to use Docker: Docker is the container, I don't care what's in the container, and it's the same idea. It's just that instead of taking minutes to build a machine image and minutes to deploy it as an instance, it takes seconds to build and less than a second to deploy. And now when you find a bug in one of these things, you go and fix that one thing, and you haven't blocked anyone else.
This is really the core thing driving people to microservices: it's the unit of deployment. When you've got too many developers contributing to it, it gets too unwieldy and you have to break it up. Maybe you spend a bit of extra time building stable APIs between teams, but again, go back to that Bezos memo: this is what Amazon did, and they've gotten very effective at it; the way Netflix works is very effective, and other teams are figuring out how to do this too. And you get this great innovation, because one of these teams says "I want to go play with Go" — hey, that's cool, I like Go. By the way, if you need to hire a Go programmer, you don't have to hire Go programmers: you hire a Java programmer and point them at Go. After a couple of days they've got it figured out, and by the end of the week they don't want to write Java anymore. Seriously, I've heard that from a few people, and it feels like it works. So you don't hire Go programmers, you make them. Go isn't as full-featured as Java, but it's actually a lot more fun to use and easier to learn. Or you could decide to go beyond Java and do everything in Scala instead, or Haskell, or whatever.

Then you can take this a bit further, because maybe some of your containers are the same all the time. Going back to the earlier picture: I've got a standard nginx front end, I've got a standard Redis layer, I've got stuff that's off the shelf. I don't need to build that; there's one in the Docker Hub, I'll just download it and run it, maybe with a little bit of configuration layered on top, and that's trivial. So now I've got standardized instances, and in an awful lot of systems a large proportion of the pieces are just standardized instances — caches, web front ends, the same stuff everywhere — and I don't need to change them. Everyone's using the same one, so they get incredibly well tested: you've got a very robust, well-tested Redis image because they know how to build it, and for nginx you go check the certificate — yes, nginx the company owns this image and they publish it, and everyone uses that one, or enough people use it that it's safe to deploy. The rest of the system is basically the same — you're still deploying your own pieces — but the proportion of the system you need to build yourself keeps shrinking, because there's so much off-the-shelf stuff. How you orchestrate all of this together becomes an interesting problem, but that's why I think the Docker Hub is particularly interesting.

So what's going on here? One of the companies I work with, VividCortex, has a MySQL monitoring tool built in Go. It's a SaaS application; they have an API server, a pretty large, complicated Go application. How long does it take to build that from source? 400 milliseconds. That's how long it takes Go to go from a big pile of source code to a compiled binary you can deploy, and that's a very substantial piece of code. Mostly the builds are too fast to measure, and 400 milliseconds is about the longest Go build I've seen yet from anyone.
OK, so my compile and build time is less than a second. Now I need to build that into a Docker container: the first time it takes a while to assemble, but the second time you do it, it's less than a second. Now I want to run it on my laptop: I launch it in my little container server on my laptop, and it takes less than a second. Now I need to run it in test: I copy those bits over to the test machine — less than a second — and it works. OK, let's put it in production: less than a second. It's ridiculously quick. You don't have time to go for coffee anymore, which is a big problem; we'll have to build coffee breaks into the workflow instead of taking them while a build runs. But once you give developers something that happens as fast as they can think, it's addictive. Give them this system, show them it's possible to do stuff in seconds, and they start asking: why am I taking minutes or hours to get stuff done? It's a total waste of my time. This is one of the reasons Docker is addictive: everything happens so fast that it's genuinely productive, and it speeds up the development process. If you can figure out how to test your thing in a few seconds, you can have a complete deployment cycle that runs ridiculously quickly.

So what's happening here is that we keep reducing the cost, the risk, and the size of change. We're making smaller and smaller changes, very quickly and at very low cost, and they end up very low risk, because we're only changing a few lines of code each time and it's easy to back out. So the rate of change goes up, you learn more, and you get more competitive. That's really what's driving this revolution. And it's disruptive: doing continuous delivery with containerized microservices is a disruptive thing to be doing if your competitors aren't doing it, and eventually it becomes table stakes — you've got to be doing it or you'll be left behind.

I keep saying "microservices", so let me try to define them. My definition is: a loosely coupled service-oriented architecture with bounded contexts. Now, service-oriented architectures have been around for a long time, so you might say, isn't this just SOA? Well, yes — if you like writing XML and SOAP and WSDL and all those things. The SOA movement got bogged down in a whole lot of very heavyweight stuff; what we're really talking about here is lightweight microservices, and making sure they really are loosely coupled. You can tell whether they're loosely coupled: if you have to update all your services at the same time, you aren't loosely coupled; if you can really update everything independently, you've got loose coupling. There are a number of things that can couple you. There's organizational coupling: if you have to coordinate across too many parts of the organization to build a service and get it running, that's too much coupling. If you've got one database schema that everyone depends on, and you have to change that schema, then you have to change all the services at the same time and hope nobody notices that you were down for a few minutes — that's not good. So you end up with a denormalized data model: lots of data stores, where each data store is effectively its own table or materialized view.
They're all somewhat out of synchronization — there's no consistency anymore — but when you have distributed systems you don't really have consistency anyway; it's a law-of-physics problem, or the CAP theorem, depending on your point of view. You have to figure out other ways of keeping stuff in sync.

Then what's this "bounded context" phrase? It comes from a book from almost ten years ago called Domain-Driven Design, by Eric Evans. One of the fun things: I put this slide up at Monktoberfest last September in Portland, Maine. Eric lives in Portland, Maine, and he came to the conference and was sitting in the front row, and I said "and here's Eric" — and no one else in the room had figured out that was Eric up to that point. It's a very good book. The one caution is that it's a bit dry for the first two-thirds; you have to work your way through it, and the last third is where the real fun stuff is. But work your way through it and read the whole book — it's very important. It's ten years old now, and I think there's an SE Radio podcast where they interviewed him for a ten-year retrospective on domain-driven design.

The whole point of a bounded context is: how much do you need to know to be productive about a thing? If it's a microservice and it's bounded, it's got standard APIs that connect to everything else. How much do I need to know about Google Maps to use the Google Maps API? I don't need to intimately know everyone on that team and how Google Maps works underneath; I just need to know the API. It's a stable API: I can import maps, I can import my Foursquare check-ins, and I can combine the two and build a mobile app that says "I'm here and I checked in". So make that the way all of your teams work. Each microservice team is trying to build an externalizable API that is stable and somewhat self-contained; the guts of what happens inside don't matter. And when a developer joins one of these teams, all they need to know is: don't break the API, make it better inside, maybe incrementally add something to the API. The amount you need to know to get up to speed on a microservice is much less than if you need to know how the whole system works. That's part of the bounding; another part is a whole set of ideas about having a well-defined language for describing things.

The other common problem with microservices is: how do you decompose a big system into microservices? What's the right way to do that? There are a couple of answers. One is that it's really the job of management to break big problems into smaller problems and give them to individual engineers to solve; that's a standard management problem everyone has, and if you're a good manager you take a big problem and break it down. The other is that there's an entire book called Domain-Driven Design that will tell you how to do it, and I'm not going to tell you how in this presentation — go read the book. A lot of it is about how to define these bounded contexts and how to compose things out of them. It's a hard problem, but help already exists.
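Here is a small Go sketch of the bounded-context point using the maps/check-ins example above. The interfaces and types are hypothetical — the point is only that the "I'm here and I checked in" app depends on two small, stable APIs, not on how either service is implemented internally.

```go
// Hypothetical sketch of two bounded contexts exposed as small, stable APIs.
package main

import "fmt"

// MapService is everything the app needs to know about the mapping context.
type MapService interface {
	PlaceName(lat, lon float64) string
}

// CheckinService is everything the app needs to know about the check-in context.
type CheckinService interface {
	LastCheckin(user string) (lat, lon float64)
}

// Announce combines the two APIs without knowing anything about their internals.
func Announce(m MapService, c CheckinService, user string) string {
	lat, lon := c.LastCheckin(user)
	return fmt.Sprintf("%s is at %s", user, m.PlaceName(lat, lon))
}

// Trivial in-memory fakes stand in for the real services.
type fakeMaps struct{}

func (fakeMaps) PlaceName(lat, lon float64) string { return "Amsterdam" }

type fakeCheckins struct{}

func (fakeCheckins) LastCheckin(user string) (float64, float64) { return 52.37, 4.89 }

func main() {
	fmt.Println(Announce(fakeMaps{}, fakeCheckins{}, "adrian"))
}
```

As long as the interfaces stay stable, each team can change its internals freely — which is the loose coupling being described.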
So these are the kinds of things that can couple you. Like I said, there's organizational coupling: Conway's law says the code will end up resembling the structure of the organization that built it, so what you should do is build an organization that resembles the code structure you want built — invert the problem. Lay out your organization in groups, have each group build microservices, and put APIs between them, so in the end it all lines up. The other thing is that Enterprise Service Buses are horrible, because everyone gets locked into one set of standard things and you can't innovate around it. You can get overly coupled to common message formats, and in my opinion it's much better to have point-to-point messages than a common message bus. Message buses also tend to suffer from split-brain and problems like that; the CAP theorem, which I mentioned earlier, basically bites you. Most message buses want a consistent view of the world, and consistency is very hard to keep available if you're distributed, so that's a problem. Netflix doesn't use message buses much — it's lots of point-to-point calls, for that reason. And then there's versioning: you have to figure out how to make your versioning flexible, have multiple versions running at the same time in production, have the routing find the right version, and keep versions forward compatible. It's a little more work, but it's worth it.

All right, so let's look at speeding things up. Say you're an old-school data center person: you buy machines, you keep them for three years, they keep the same IP address, and they're still running the same Java app three years later. That's a data center snowflake, and it maybe takes you a few months to deploy one. That's the state of the world a lot of people live in. Then you virtualized it — you've got your VMware stuff — and you're really happy, because now you can deploy in minutes and machines maybe live for weeks, then get redeployed as something else for the next build; life's a bit better. A lot of the world now runs in that kind of environment, where a few weeks is a typical lifetime for a machine, although some companies still take weeks just to get a machine at all, because you file a ticket and wait a month. Now we've got containers, where it takes seconds, and if it only takes seconds, it's worth just turning stuff off: it's worth having a machine that lives for minutes or hours. There are 168 hours in a week. How many hours a week are you working — 50 or 60, maybe, if you're really dedicated? So there are an extra 100 hours, two-thirds of the time, when you're not there. Why are your machines running? You should shut down your whole test and dev environment for two-thirds or three-quarters of the time; it's a perfectly reasonable thing to do, and if it's running on a cloud, you stop paying for it. I just saved you two-thirds of your cloud bill. There's a company, Shippable.com, that does roughly this: they run Docker containers for all your test and dev environments, they consolidate your containers so you run more of them per machine, and they shut stuff down when you're not using it, and between those they say they easily save 70% of your test and dev bill. So that's an obvious use of containers.
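As a back-of-the-envelope check on the "168 hours in a week" point, here's the arithmetic in a few lines of Go. The 60-hour working week is the talk's own generous assumption, not a measurement.

```go
// Rough arithmetic: how much of an always-on dev/test bill is idle time?
package main

import "fmt"

func main() {
	const hoursPerWeek = 168.0
	const workingHours = 60.0 // generous 60-hour week, per the talk

	idle := 1 - workingHours/hoursPerWeek
	fmt.Printf("idle fraction: %.0f%% of the week\n", idle*100) // roughly two-thirds
}
```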
There's an empty slot on the right of that chart, so what's the next extension? This one isn't really related to Docker: AWS came up with this Lambda thing. If you look at it, it's a container they fire up with a Node.js function in it; it runs once and then they shut it down, and they charge you for every 100 milliseconds you're running. They're not charging by the hour, they're charging by the 100 milliseconds, and they give you a million requests a month for free — every month, not just if you're a startup; every month you get another million for free. So you can build a pretty large-scale home IoT system with this for essentially nothing. I've been playing around with it for a while. The maximum lifetime of these things is currently set at 3 seconds; after 3 seconds they kill it. If you send it enough requests it will actually still be there for the next request, so it gets a bit more efficient, and at some point you'd be better off just having a permanent machine running Node — but it takes a while to get there. This is the logical extension of containers: if you can really create containers in milliseconds, then you can run one per request and shut it down again. So it's interesting to see that happening as an extension to this whole thing. It's not built with Docker, but you could build something like it out of Docker fairly simply; you'd need an event queue and some deployment management stuff. Actually, just a couple of days ago they took Lambda out of limited preview — it's still alpha, but it's now available to anybody, so anyone can go play with it. I stared at the Node manuals long enough to make it do something, and it was OK; if you can code in JavaScript, you're pretty much there. They're looking at adding more languages, and Go was on the list of ones they were considering — hopefully; they were thinking about it, anyway. So this is interesting, because architectures now have this option on the right-hand side where you can do stuff really, really quickly.
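To make the run-per-request lifecycle just described a bit more concrete, here's a toy Go model of it — this is not AWS code, just a sketch of "spin up a worker on demand, keep it warm while requests keep arriving, reclaim it after a short idle timeout" using the roughly 3-second lifetime mentioned above.

```go
// Toy model of a Lambda-style worker lifecycle (hypothetical, not AWS code).
package main

import (
	"fmt"
	"time"
)

func worker(requests <-chan string, done chan<- struct{}) {
	const idleTimeout = 3 * time.Second // the talk mentions a ~3 second lifetime
	for {
		select {
		case req := <-requests:
			fmt.Println("handled", req) // stand-in for the function body
		case <-time.After(idleTimeout):
			fmt.Println("idle, shutting worker down")
			done <- struct{}{}
			return
		}
	}
}

func main() {
	requests := make(chan string)
	done := make(chan struct{})
	go worker(requests, done)

	// A short burst of requests keeps the same warm worker alive...
	for i := 1; i <= 3; i++ {
		requests <- fmt.Sprintf("request-%d", i)
		time.Sleep(500 * time.Millisecond)
	}
	// ...then traffic stops and the worker is reclaimed.
	<-done
}
```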
So what's the state of the art? I rummaged around and talked to a lot of people, and these are the architectures I've seen. Netflix OSS: they've worked on this for a long time, there are over 50 projects now, they're still releasing new things, and there are newer versions of stuff coming that I know about. They've dockerized all of the Netflix OSS tools, so you can now fire them up on your laptop and try stuff out; that's called ZeroToDocker, I think, and there's a piece about it on the Netflix site plus various presentations. Gilt is interesting: it's sort of a version of the Twitter architecture. Twitter built some really interesting stuff, largely based on Scala, but it's data-center oriented; Gilt took that, did a cloud-based version that's a bit more dynamic, and then dockerized it, so the Gilt system is a Docker-based Scala system running on AWS. Hailo is interesting because they built their own: it's a taxi company — think of it as Uber in London — and it's built in Go; they rebuilt all of their systems as a microservices architecture with everything in Go, so they have a Go-based system. And Groupon and Walmart and a few other people like that are doing their stuff in Node. So those are different flavors of what's going on.

What are you trying to do when you build a microservice architecture? You've got all these different pieces: tooling for figuring out how to build and deploy, configuration, service discovery, routing traffic between things, and observability so you can see what's going on — those are the top boxes. Under that you have data stores, then you've got to orchestrate and deploy all this stuff, and then there's the particular set of languages and container technology each microservice platform uses. That's the template for any microservice architecture.

If we look at Netflix, they have a bunch of weird names that most people can't spell, because engineers invent the names at Netflix, so that's what you get: Asgard for deployment, Aminator for building AMIs, Edda and Archaius for tracking the configuration and making dynamic changes to it, Eureka as the discovery service with a discovery client to go with it, Denominator as a DNS management layer, and Zuul as the API routing tier, the API proxy — they've been doing a bunch of work with Netty for that — plus Ribbon, which is their inter-process communication library for routing traffic. For observability, Hystrix shows you circuit breakers and what's broken, Pytheas is for building dashboards, and Salp is a tracing and logging system they haven't quite finished open sourcing yet. That's running on top of a bunch of ephemeral data stores, which is interesting because if it's ephemeral you can easily dockerize it: you just stuff Cassandra into a container, and if it blows away, you blow away the data with it — it doesn't matter, because the data is ephemeral in the way Netflix runs Cassandra — plus Memcached and a bunch of other tools. They orchestrate mostly around creating new things; rolling things out is somewhat manual, though they have processes that roll updates out across the world, and I think that stuff is likely to be open sourced fairly soon, but it's not out there quite yet. And then all the different languages: mostly Java-based — Groovy, Scala, Clojure — plus some Python and a bit of Node, mostly as AMIs in production with a bit of Docker for playing with. So that's a complete architecture, and it's really focused on the idea of being able to distribute things globally and build a very highly available, highly scalable architecture. There are a bunch of people using it, all kinds of interesting companies — Suncorp, by the way, is a bank in Australia, and there's somebody from there sitting over there. This is the architecture diagram; we gave up trying to draw it, because basically everything talks to everything else, there are hundreds of services, and you can't draw it on a slide anymore.

OK, what does Twitter look like? They've open sourced some stuff. They have a configuration thing called Decider, but the interesting pieces are Finagle, ZooKeeper, and Zipkin; Finagle and Zipkin are the key ones. Finagle is their discovery and routing layer, and Zipkin is the observability tool — very powerful. They have their own Cassandra-like data store, which I think they call Manhattan, and they use a version of Mesos with Aurora as the scheduler for deploying things, and it's mostly Scala on a JVM. So that's an interesting architecture. Anyone here from Twitter? It's worth asking — no one right now. They're building a very efficient data center deployment at scale; that's what they were trying to do. And that's their architecture diagram, which is basically the d3 version of the Netflix one — you still can't read it.
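Hystrix, mentioned above, popularized the circuit-breaker pattern for these architectures. Here's a deliberately tiny, hypothetical Go version of the idea — not Netflix's code — showing the core behavior: after too many consecutive failures the breaker opens and calls fail fast instead of piling up on a broken dependency.

```go
// Minimal, hypothetical circuit breaker in the spirit of Hystrix.
package main

import (
	"errors"
	"fmt"
)

type Breaker struct {
	failures  int
	threshold int
}

var ErrOpen = errors.New("circuit open: failing fast")

// Call runs fn unless the breaker is open; a success resets the failure count.
func (b *Breaker) Call(fn func() error) error {
	if b.failures >= b.threshold {
		return ErrOpen
	}
	if err := fn(); err != nil {
		b.failures++
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &Breaker{threshold: 3}
	flaky := func() error { return errors.New("dependency timed out") }

	for i := 0; i < 5; i++ {
		fmt.Println(b.Call(flaky))
	}
	// A real breaker (Hystrix, for example) also half-opens after a timeout to
	// probe whether the dependency has recovered; that part is omitted here.
}
```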
Gilt took that stuff and added a few more things. They have some tooling — Ion Cannon, I forget exactly what that one does — and a bunch of other things for building the system. Again they have Finagle and Zipkin, but they've gone to Akka, the Scala actor version of things; it's more of an actor framework. They have a bunch of different data stores — there's some Voldemort in there for some reason, which very few people use nowadays, and Postgres — deploying on Amazon, with a mixture of Scala and Ruby, using Docker as the container for delivering the business logic. Underneath that, though, it's largely statically defined data stores; they're not using Docker for their data layer at this point. They're really optimizing for fast development and being very agile. If you know Gilt, they're a flash-sale site: midday every day they launch new stuff, so they get a big pile of traffic midday every day and then it dies off — a very spiky workload. That's their architecture diagram, which is, I guess, a force-directed-graph version of the everything-talks-to-everything diagram.

Hailo, again, have a Go platform, and they use RabbitMQ — they're doing some messaging, although they were having a few problems with that, which didn't surprise me given my experience with queues. There are good places to use queues, but don't use queues for everything, because you'll run into problems when you have failure modes; there are a lot of corner cases in the failure modes that get in the way. They've got a nice request-tracing system, a lot of Cassandra-based data stores, and they're using Go with Docker as their deployment model. Their architecture diagram looks rather similar to the Twitter one, because they're using d3 again.

Then there are a few different versions of this at Groupon and Walmart, mostly built as a bunch of different microservices. There's a thing called Seneca (senecajs.org) that comes from a company called nearForm in Ireland; it's a nice, easy JavaScript framework. And Amazon Lambda sits sort of in the same space. So there are a number of different ways you can build these things.

I've been playing around and built a prototype — the code's on GitHub — to try to simulate large-scale microservice architectures. I haven't quite finished it, mostly because I spend too much time writing presentations in Keynote when I should be spending more time writing Go, but I'm trying to simulate large-scale microservice architectures with this Go thing called spigo: simulate protocol interactions in Go. I've also got some visualization I've been playing with, trying to learn d3 and JavaScript, and I'm trying to hook those together so that over the next few months I get something built. If somebody's desperately interested, I'd be very happy to have a few pull requests on this; so far no one's helped, so you could be the first person to get in there and really do something — other than telling me I formatted my Go wrong. That's the only pull request I've had so far: I forgot to run gofmt once.
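In the spirit of the spigo experiment just described — but not spigo's actual code — here's a small Go sketch of the same idea: model each microservice as a goroutine and each dependency as a channel, then watch requests fan out through a toy topology. Names and topology are made up for illustration.

```go
// Toy microservice simulation: goroutines as services, channels as dependencies.
package main

import (
	"fmt"
	"sync"
)

type message struct {
	from, body string
}

// service forwards everything it receives to its dependencies, which is enough
// to see call trees emerge from a simple topology.
func service(name string, in <-chan message, deps []chan message, wg *sync.WaitGroup) {
	defer wg.Done()
	for msg := range in {
		fmt.Printf("%s handled %q from %s\n", name, msg.body, msg.from)
		for _, d := range deps {
			d <- message{from: name, body: msg.body}
		}
	}
	// When our input closes, shut down the services downstream of us too.
	for _, d := range deps {
		close(d)
	}
}

func main() {
	edge := make(chan message)
	mid := make(chan message)
	store := make(chan message)

	var wg sync.WaitGroup
	wg.Add(3)
	// A three-tier toy topology: edge-proxy -> mid-tier -> data-store.
	go service("edge-proxy", edge, []chan message{mid}, &wg)
	go service("mid-tier", mid, []chan message{store}, &wg)
	go service("data-store", store, nil, &wg)

	for i := 1; i <= 3; i++ {
		edge <- message{from: "client", body: fmt.Sprintf("request-%d", i)}
	}
	close(edge)
	wg.Wait()
}
```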
So when you're doing web scale, the characteristics are that brand-new services are relatively infrequent — a new service means a new project started — but you're deploying new versions of those services over and over and over again. So you don't need general-purpose orchestration that can deploy everything; you very rarely deploy all of the things that make up Netflix into a new data center. That happened roughly once, in 2013, when they went east and west coast and had to stand up everything in Oregon that matched what was in Virginia — but that's one time. With hundreds of microservices, everything is customized, and the orchestration is all about getting a new version out, for hundreds of developers, all the time. There's lots of nice orchestration they've built to do that, but it's a different kind of problem from general-purpose orchestration.

So what's coming next? Well, one thing: if you go to the Docker Hub and download, say, Redis — great, that's helpful. But what if you want to download an entire application? Say I'm an Internet of Things supplier and I need a back end, and maybe in the future there's an Internet of Things app on some future version of the Docker Hub — go with me here, pretend it's kind of like the Apple iOS App Store. You click a button, it charges you $10 an hour or something while you run the thing, and what it installs is an API front end for collecting data from your devices, written in Node or something; it stores that in some Cassandra or Riak or whatever back end; it has an analytics layer with a bunch of Hadoop and Spark or something; and a mobile back end so your customers can see their things. That's a generic Internet of Things bundle. Let's assume that's one click. How do you deploy it? Because it's all standard components and a few bits of glue, you could install it on your own premises, on Amazon, on Rackspace, on Google, on Azure — anywhere; it doesn't matter. So potentially we end up with a single global app store of interestingly complicated applications. But what does it take to go from saying "I want this" — like installing one piece of code on your phone — to probably tens of autoscaled containers that need to be orchestrated and connected up in the right way to make up that application? That's a different kind of orchestration. You could code it up and figure out something to lay it all out, but that's the interesting problem: you could build those kinds of things, and then maybe Docker could make some money off it, like Apple makes money off the App Store — in-app purchases and things like that. So this is my future prediction for one place we may end up; we may never get there, but it's at least one plausible destination. You'd need orchestration that can install a complete thing in a place where it's never run before, with all the logging and automation you'd need around it. Tens of microservices seems fairly typical for that — probably not the hundreds that Netflix has, but tens of things — and you could end up with an enterprise app store. There are already places that are sort of app stores: VMware has one, Amazon has one,
Chef has one, Red Hat has one — everyone has one, and there's really nothing in any of them. There are a few things, but if you had one central place that worked across all those platforms, it would be much more powerful. That's the idea of one central app store. Remember the old days when you had a Verizon BlackBerry, and there was the Verizon app store with three crappy apps you didn't want to install? That's where we are now; we haven't gotten to the point where there's an iOS- or Android-style app store with more stuff than you could possibly find or download.

So for these next-generation applications we haven't really figured out what the tooling is. The configuration system: not quite sure who's going to win there. What's the service discovery model? How do you route stuff between services — maybe it's Weave or SocketPlane or whatever, or the Docker networking stuff? How do you do observability — maybe that's Rancher or something; there are a lot of people working on these things. Then the data stores. Ephemeral data stores: just run Cassandra in a container, and if you lose the container you start another one and it sucks its data from the other containers — that works well enough for Netflix to run, so it's at least plausible. Or orchestrated storage, which is what ClusterHQ are doing: I'm deploying a container with its storage here and I want to move it, so I want that data to be persistent and to move around with it. Or database-as-a-service: use Aurora, which is the MySQL-as-a-service thing Amazon has now, or DynamoDB, or the Google or Microsoft equivalents — you just access a database and don't worry about it. There are a number of ways to do it, and different people will use different ways. Then there are lots of different orchestration choices; at the moment there's a battle going on between all the people saying "I have the orchestration" — "no, I have another way of doing it" — and it's not clear who's going to win. And then assembling components: how do you develop these components, how do you test them, how do you make sure they really do run on Google and Rackspace and Amazon and in your own data center without actually having to test every combination every time? So this is rapidly evolving, and that's one of the fun things about playing in this space right now.

Just to wrap up, there are a few things I'm looking at going forward. One is that almost everything new I see is written in Go — maybe three-quarters of the new projects. I'm not saying you should write everything in Go; I'm just observing that people are writing stuff in Go, with a bit of Scala and a few other things kicking around, but it's really taking over as the language new things are being written in, so something is going on in that ecosystem. This is what's happening in enterprises: there's a book that's finally hit the presses, Lean Enterprise — you can go get a copy now. Enterprises are adopting continuous delivery, DevOps, and lean startup at scale, and that's a fantastic opportunity for companies in the enterprise IT space to sell to these people, because they're trying to solve these problems right now and they're desperate for solutions; that's a fun opportunity. Finally, the whole monolith-versus-microservices thing: every time you go to a software architecture conference, half the talks mention microservices now, and it's getting to the point where you wonder whether
you really want another microservices talk — well, it turns out the people in the audience want it, so we try not to get bored of it too soon; these names wear out a little sometimes, so it's become a bit of a buzzword. I think at one point last year, before it really became a buzzword, I was at QCon London or somewhere, and it turned out everyone on the architecture track was talking about this, but nobody had put "microservices" in their title — yet it was all we were talking about. All right, this is my last slide: I'm at Battery Ventures, which is battery.com — adrianco at battery.com if you want to email me — and my Twitter handle is also adrianco. I have a blog, perfcap, which I don't update very often, there's lots of stuff on SlideShare, and I've done a bunch of talks this year with videos on different subjects; if you're interested in cost optimization, I have a whole separate presentation on that as well. OK, maybe we can take some questions to wrap up.

[Audience question] With microservices, as far as designing them goes, what was your experience with building and trusting the shared pieces — something like nginx, for example — within that microservices setup? And as you start looking at things like load balancing, is that better handled at the microservice layer, or is it a shared service?

OK, so the question is really about how you incrementally get there — how you incrementally get to microservices. There are two pieces. One big piece is the back end: if you have a monolithic data store behind your system, you have to figure out how to break it up. You take an individual table or materialized view — typically your MySQL database is a dumping ground for everything anyone ever wanted to put in it, so there's a bunch of unrelated stuff in there — and you take something that really is an unrelated piece of the schema, split it off, put it in its own cluster, and run it on some NoSQL thing as what is effectively a single table, a single view. You do that repeatedly, so you run lots and lots of database back ends. Then in front of each one you put a data access layer service, and you never access the database directly: there's only one service that ever talks to that database, the data access layer for that thing, and all other traffic is a REST call into that service. So the storage tier becomes a service over HTTP. Staash — with two A's — is one of the projects Netflix built that does that; it's got MySQL and Cassandra back ends and a REST front end that talks to both. What you can do is take your existing system, put a REST web service in front of the data store, and change your existing business logic to use that instead. You're still talking to that one back end, but now you can split the code off, because everyone goes through the web service to get to it. So there's a step-by-step way of splitting up the back end.

The front end is a similar problem: you have to split things off, and what you actually do is put in a web/API proxy tier. In front of your web server or API server or whatever monolithic app you've got, you put Zuul (which is the Netflix thing) or Apigee or something like that, and you start splitting up the traffic: first it's a pure pass-through, then you say, if you hit this URL, go to this web service; if you hit that URL, go to that other web service.
So now you've got a bunch of web services, each handling a single sub-URL, but it looks like one endpoint, because there's one endpoint at the proxy tier, and there you can do authentication, rewrite things, do A/B testing, do your version management, and play all kinds of games, because you now have an abstraction. And if you're building something from scratch, the first thing you should build is the API proxy tier, before you build anything else: stand up an API proxy that fakes out the entire API with dumb, fixed returns for everything, and then you can build the client that talks to it while you're building the individual services that make each return value actually dynamic instead of fixed — and then you've got lots of little microservices. So those are the two ways to layer it. It's classic computer science: add layers of abstraction until you've solved the problem. Yes, I know it's an extra hop through the network, another half millisecond of network traffic or whatever you've got to go through, so it makes things slightly slower, but the flexibility and agility you get pays that back. That's the trade-off.
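Here's a small Go sketch of that API-proxy-first approach, with hypothetical routes and hostnames: one path is proxied through to a service that already exists, one is served by a dumb fixed stub so clients can be built before the real service exists, and everything else passes through to the monolith. (Zuul and Apigee are the heavyweight versions of this idea; this is just an illustration.)

```go
// Hypothetical API proxy tier: route some URLs to services, stub out the rest.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	mux := http.NewServeMux()

	// /accounts/ has already been split off into its own microservice.
	accounts, _ := url.Parse("http://accounts.internal:8080")
	mux.Handle("/accounts/", httputil.NewSingleHostReverseProxy(accounts))

	// /recommendations/ doesn't exist yet: fake it with a fixed return so the
	// client team can start work against a stable endpoint today.
	mux.HandleFunc("/recommendations/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"items":["placeholder-1","placeholder-2"]}`))
	})

	// Everything else passes through to the existing monolith unchanged.
	monolith, _ := url.Parse("http://monolith.internal:8080")
	mux.Handle("/", httputil.NewSingleHostReverseProxy(monolith))

	log.Fatal(http.ListenAndServe(":8000", mux))
}
```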
[Audience question about where DBAs fit in this model, on the development side or the operations side.]

I tend to think the schema for an application is part of the application, so whoever owns the schema goes with it. You can treat the DBA role as "someone is imposing a schema on me and I just have to make it work", by adding the right indexes and keeping MySQL or Oracle happy, or you can say "I'm really going to design the schema for this application", which is more of a back-end developer role. So you can draw the line on either side; it isn't really an issue. If you have a NoSQL database that's just key-value, there wasn't really a schema there anyway, so it's pretty simple. There are patterns with something like Cassandra, things that are good or bad to do, but it's relatively simple. One example: we had a team migrate from MySQL to Cassandra, and they were very worried about it because they had all these complicated things they wanted to do. When they actually did it, the reaction was, is that all there is? It really doesn't do very much, does it? It didn't take long to figure out, and then it was fine, we can build on top of this. They were worried they would have to learn something as complicated as MySQL, and SQL is a complicated language that takes a long time to learn, but what Cassandra can do, or what Riak can do, really isn't much, so it's very simple to learn. The transition is easier than most people think. You do have to give up transactions and ACID and all the things they told you in college you should have; you just have to unlearn them. Sorry, you wasted a year of your CS degree.

[Audience question about traceability when something goes really wrong across many microservices: do you standardize logs and share models so the interactions between services can be traced?]

So the question is, when you've got all these microservices talking to each other, how do you have a shared model? The Netflix approach was that everyone is mostly using the same Java code: Ribbon is the client-side library and Karyon is the server-side framework, so everyone builds on the same basic code base. That means you instrument the reusable components. The component that knows how to talk to another service is already instrumented, so you don't have to think about it; you just call that code and the instrumentation is baked in. Zipkin and Finagle from Twitter are the same idea: pre-instrumented, logging in a standard way. The Netflix code has a standard annotatable object, one object that gets logged, and what that means is that when it finally flows into Kafka and Hadoop and so on, the column names are very stable and well defined, because the object defines the column names. Not every field is populated by everybody, but if a field exists it has exactly one name, so you can build tooling that works across all the tools. Those are the tricks: either use a standard annotated object if you've got more generic things going on, or bake the logging into the shared tooling.
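To make the "one standard annotatable object" point concrete, here is a small sketch; the field names are invented, and the only point is that every service logs the same shape so downstream tools always see the same column names:

```python
# Sketch of a single standard request-event object that every service logs.
# Field names are hypothetical; what matters is that each field has exactly
# one name everywhere, so Kafka/Hadoop/dashboard tooling can rely on it.
import json
import time
import uuid
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class RequestEvent:
    service: str                     # service emitting the event
    operation: str                   # logical operation, e.g. "GET /subscriptions"
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    parent_id: Optional[str] = None  # caller's event id, for cross-service tracing
    start_ms: float = field(default_factory=lambda: time.time() * 1000)
    duration_ms: Optional[float] = None
    status: Optional[int] = None
    annotations: dict = field(default_factory=dict)  # optional extras, shared key names

    def emit(self):
        # A real platform would ship this to the log pipeline; stdout will do here.
        print(json.dumps(asdict(self)))

# Usage: the shared client/server wrappers fill this in, not application code.
event = RequestEvent(service="subscriptions", operation="GET /subscriptions/42")
event.status, event.duration_ms = 200, 3.7
event.emit()
```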
Visualizing it is the other problem. I showed you those Death Star diagrams, as I call them, which are just a big round blob, and it's still hard to figure out how to visualize them. But from the bounded context of one developer, I have my block of code, it has, say, ten dependencies and five consumers, so I have a blob in the middle with things around it, and that's it; that is my world. I don't care what the other hundreds of services and their dependencies are. My world is the five consumers that use my code and the ten things I depend on, and that's all I need to know about. That's the bounded context for the developer.

[Audience question: that works for one developer, but what about the overall architect staring at the whole dynamic system and trying to work out what is going on?]

End-to-end tracing is something the Zipkin tools can show you. The Salp tool I mentioned, which Netflix built, they've talked about in public but haven't open-sourced yet; it uses the Netflix tracing and basically gives you those views. A lot of the tools out there are working toward building that, and part of what I'm doing playing around with my D3 stuff is trying to find ways to visualize it a bit more, so I think it's emerging. These systems, after a while, have hundreds of microservices: Gilt had 450, Netflix has 600, Groupon has well over five or six hundred. It's normal to have somewhere between 500 and a thousand microservices to deal with, which is more than you can really render on one screen, so you have to find a way to visualize it that shows you just the stuff you care about and hides everything else. In terms of monitoring and visualization that's the bleeding edge, and there's some interesting work to be done there.

On our side it's probably worth mentioning a project on our GitHub called Eliot; the tagline is "logging as storytelling", and one of my colleagues maintains it. The idea is that you can do tracing across process boundaries and across services: it builds up a tree structure of the events that happened, and then you can pipe that into an ELK stack and get visibility into what happened across the distributed system, which is pretty useful. Netflix uses Elasticsearch as well. It isn't always a tree, though; the problem is that you get services calling each other and calling someone else that calls back, so it turns out it isn't a directed acyclic graph, there are cycles in it. There shouldn't be, and I complained to the people doing it, but they did it anyway, so things get messy.

[Audience question: how do you move to microservices when services need access control to talk to other services?]

On the access control side there's one answer for AWS and the clouds, and another answer for data centers. The standard pattern Netflix used was that for every service, every app name, there was a security group with the same name. Whenever you create a service you also create a security group with the same name; the tooling just does it for you. That service trusts anything that is in that security group, so in order to call a service you have to be in the security group with the same name as the service. If I'm running service A, I look at security group A and I see there are four entries in there; those are the only four callers that could possibly be reaching me, because the network won't let anyone else in. You get a least-privilege approach to the network flows in the system, and it's a very locked-down system. If you also use key stores to encrypt everything, and Amazon has a key store, you can manage keys very carefully: very fine-grained keys, individual keys for encrypting particular types of data, and you check out just the keys you need for that kind of data. Then you have IAM roles, identity and access management, where you only have permission to do the few things you're supposed to be doing. So you can build up an entire system where everything that happens is based on least privilege, and you can automate that to the point where it doesn't become a pain in the neck. If you tried doing it by hand it would drive you crazy, which is why what normally happens is, let's just build a nice crunchy perimeter, harden the edges, and mess around with the stuff in the middle. This way, every individual machine is like a marble in a bag of marbles: you can't easily break into one, and if you do, the machine disappears an hour later anyway, because it's dynamic and on an autoscaler, and you're done. Or if you broke into a machine running Lambda, it's gone in three seconds. If you want to build a totally secure architecture, do it all with Lambda; I don't see how you could possibly break in and do anything at all, it's so locked down that you have to have security groups and IAM roles just to invoke things back and forth. So I think there's the ability to build extremely secure and resilient systems here, because it's programmable infrastructure: you can automate all of the hard work of setting it up that way into the platform.
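One way to express that security-group-per-service pattern in code is sketched below with boto3 against a placeholder VPC. This is not the Netflix tooling; the service names, VPC id, and port are made up, and error handling is omitted.

```python
# Sketch of "a security group per service": creating service X also creates
# security group X, and X only accepts traffic from callers that have been
# granted access through their own group. Ids, names, and port are placeholders.
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder

def create_service_group(service_name):
    """Create the security group that shares the service's name."""
    resp = ec2.create_security_group(
        GroupName=service_name,
        Description=f"Instances and callers of {service_name}",
        VpcId=VPC_ID,
    )
    return resp["GroupId"]

def allow_caller(service_group_id, caller_group_id, port=443):
    """Let members of one caller group reach the service on one port, nothing else."""
    ec2.authorize_security_group_ingress(
        GroupId=service_group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": caller_group_id}],
        }],
    )

# Deployment tooling would run these automatically when a service is registered,
# so least privilege is the default rather than extra work for each team.
subscriptions_sg = create_service_group("subscriptions")
api_proxy_sg = create_service_group("api-proxy")
allow_caller(subscriptions_sg, api_proxy_sg)
```

Reading back the ingress rules on a service's group then tells you exactly who could possibly be calling it, which is the property described above.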
Some platforms are starting to do a bit of that; Apcera does some of this sort of thing, but most people haven't really got there yet. I'd say that's the mindset most of Netflix is architected with. If you wanted to build something like that in a normal data center you'd need a lot of tooling to do it, whereas most of the big cloud vendors have that kind of model baked in already.

[Audience question: this architecture works around problems you have with very large code bases and very large teams, but for, say, an early-stage startup of fewer than ten people, is any of this relevant, or is it only relevant for large organizations?]

So the question is whether microservices are helpful for small teams. It's definitely needed at scale, but it also helps you get speed. If you're a small team, do you want to configure and build your own nginx, or do you want to just download it off Docker Hub? That's now a microservice, a separate thing, and maybe you've got nginx and Redis and the standard off-the-shelf building blocks; you don't need to curate them and hand-build them anymore, they're containers, you run them, and you run your code in the middle. Your code might be monolithic to start with, because there are only a handful of you building it, but once it stabilizes and you want to add a new feature, do you want to destabilize your existing code base, or do you want to start a new code base, in a new process, in a new container, that has the new code in it? As soon as you start rapidly versioning things, you want to use containers to manage the versioning: you have all the old, stable versions in containers that are immutable and don't change, and you put your new code, which is maybe flaky because you only just finished writing it, in a new container. You don't combine those into one code base and try to deploy that, because then your changes destabilize the old stuff. That's part of being able to run fast; it's like running fast with scissors but without hurting yourself somehow, and maybe the analogy got a bit lost, but it makes it safe to go really fast, because you're always deploying immutable containers with the new stuff in them while the old stuff is still there. So microservices help you go faster even with a small team.

My model is really that a microservice is one person's work. It's not the two-pizza-team thing; that's a whole product, and the two-pizza team is the Amazon approach. An individual container should be one person's work, and you need to code review it with someone else, because you're responsible for it in production: if it breaks you'll get the call, and you want to go on holiday occasionally, or sleep, so you want somebody else to be able to take over managing it. You tend to buddy up and pair with somebody, not from a pair-programming coding point of view, but in terms of code review and support. There are usually a few people who know how any particular thing works: one primary author and a few reviewers who can fix it, operate it, and know what state it's in, and teams buddy up that way. That's the model, and I think it scales down to just a handful of people.
The complexity of managing all the pieces is really a platform problem, and in 2009 Netflix had to build its own platform from scratch, because there was no platform that would do what we wanted to do. Nowadays the platforms exist; I just showed you a bunch. Pick your language and download the one you want: if you want to write code in Scala, go download stuff from Twitter and Gilt; if you want to write in Node, go download the Node stuff; or use the Netflix OSS or Cloud Foundry or a Docker sort of environment. The platforms support this stuff now, so even with a very small team you're standing on the shoulders of giants: there's an awful lot of stuff that's already been done and is pretty solid that you can stand on, so it's getting more and more productive and easier to build things.

[Audience question: what's the most successful approach to interface versioning?]

Versioning is one of those things where there is no good way of doing it, only some less bad ways, and you usually get huge arguments over the right way to do it; that's a whole long discussion. But if you look at the REST interface into a service, the first thing somebody will do when they see it is build a little piece of code that knows how to decode that interface, then give it to their friends, and the next thing you know you have a client library, even if you didn't want one. It's supposedly self-describing, but you end up with a client library anyway. So if you're building an interface, build the client library as well. My first rule is that even if the interface is supposed to be self-describing, you should build, in at least one language, a client library that does the proper error handling and knows how to encode and decode the payload in some efficient way. It should be self-contained and shouldn't drag in lots and lots of dependencies: a very self-contained POJO if it's Java, or something like that. Do that, and everyone will copy that library if they need it in a different language, and then the act of using an interface is just importing that library into your build, and you're done. And if you don't trust the library, you wrap it in "I don't trust you", which is what the reactive model is: you wrap it in a circuit breaker and run it in its own thread. This is the Java model Netflix has. You wrap the call in a circuit breaker, spawn it on its own thread, ask it to do something, and if it doesn't do it you shoot it, move on, do something else, flip the circuit breaker to say this thing isn't working anymore, and point the finger at the downstream thing that is now officially broken. It may even be the client library that's broken, but something's broken. That's the model that evolved over a year or two at Netflix. As for the versioning itself, there are lots of ways of versioning an interface: you can put stuff in headers, you can put stuff in the URL, people argue about the right way, and I think it's been done just about every possible way. So hopefully that helps.
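On the "wrap it in a circuit breaker on its own thread" point, here is a minimal sketch of the idea. Netflix released its Java implementation of this pattern as Hystrix, but the code below is only an illustration with invented thresholds, not that library:

```python
# Sketch of the "I don't trust you" wrapper: run the client-library call on a
# worker thread, give up after a timeout, and after enough consecutive
# failures flip the breaker open so callers fail fast for a while.
import time
from concurrent.futures import ThreadPoolExecutor

class CircuitBreaker:
    def __init__(self, timeout=1.0, max_failures=3, reset_after=30.0):
        self.timeout = timeout            # how long to wait before giving up on a call
        self.max_failures = max_failures  # consecutive failures before opening
        self.reset_after = reset_after    # seconds to stay open before retrying
        self.failures = 0
        self.opened_at = None
        self.pool = ThreadPoolExecutor(max_workers=4)

    def call(self, fn, *args, fallback=None):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after:
                return fallback           # breaker open: fail fast, don't even try
            self.opened_at = None         # half-open: let one attempt through
        future = self.pool.submit(fn, *args)
        try:
            result = future.result(timeout=self.timeout)
            self.failures = 0
            return result
        except Exception:                 # timeout or downstream error both count
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # downstream is officially broken
            return fallback

# Usage: wrap calls through the (possibly broken) client library.
breaker = CircuitBreaker()
# result = breaker.call(subscriptions_client.get, "user-42", fallback={})
```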
[Audience question about software-defined networking and how it relates to all this.]

SDN is, in some sense, the data center people trying to build VPC-type things, virtual private cloud kinds of capabilities, where you can provision dynamically with software all the things you used to have to poke at a Cisco UI or command line to do: automatically deploying VLANs, subnets, layers of protection, and so on. It's somewhat orthogonal to the microservices space. The way I see it, if you've got security groups you're basically programming a firewall at every level, so instead of having one firewall and two VLANs on either side of it, you just deploy individual rules everywhere. If you're using something like Weave, you're defining a network layer where you can do some interesting overlay networking between containers, and that's interesting. I think the interfaces for managing it are a bit flaky right now: OpenStack Neutron is fine as long as you've only got a few machines, and as soon as it gets too big people stop using it because it doesn't really scale, and there's a bunch of problems like that with a lot of the SDN models that are data-center oriented. But the general idea that you want to make the network infrastructure programmable is important. There are certain things you really want to wrap in layers of firewalls and protection, and you can do that programmatically; the public clouds all have ways of doing it, so being able to do it in a data center is a good approach. And OVN is kind of a new way to virtualize a network and make it more lightweight; it only came out a couple of days ago, so I haven't really looked into it yet. Any more questions? OK, one more.

[Audience question: there's a consistent theme running through these descriptions of microservices of platform consistency and language consistency, except at the edges. Some startups and smaller companies hear about the pattern, do "microservice architecture", and end up with a team of six people, nineteen microservices, and seven to twelve programming languages, and there's a death spiral where the overhead of shifting between conceptual frameworks becomes staggering. Any guidance on how consistent to make the language and architecture choices, and how big an individual service should be so people can get their heads around it?]

So the question is about what happens if you really let the microservices stuff get out of hand and you have too many versions of everything, because there is a tendency to go too far. There are a couple of points there. There's a common platform in every system: you need to find services, so there's a service discovery system, and you need a set of libraries that know how to find each other, plus some common logging, analysis, and monitoring. That tends to constrain the number of different patterns you have. Netflix was originally Java-based and then JVM-based, so you could do Groovy, you could do Clojure, you could do Scala, but you're still calling into the same JVM; the platform is a bunch of jar files regardless of what language you're using on top.
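As a sketch of that "libraries that know how to find each other" piece, here is a toy client-side discovery helper. The registry contents are invented; in the Netflix stack this role is played by real components such as Eureka for the registry and Ribbon for client-side load balancing, and this code is neither of those.

```python
# Toy sketch of client-side service discovery plus round-robin selection.
# The registry dict stands in for a real lookup (Eureka, Consul, DNS, etc.).
import itertools
import urllib.request

REGISTRY = {  # hypothetical registered instances per service
    "subscriptions": ["http://10.0.1.11:8081", "http://10.0.1.12:8081"],
    "recommendations": ["http://10.0.2.21:8082"],
}

_counters = {}

def pick_instance(service):
    """Round-robin over whatever instances are currently registered."""
    instances = REGISTRY[service]
    counter = _counters.setdefault(service, itertools.count())
    return instances[next(counter) % len(instances)]

def call(service, path):
    """The one routine every client library funnels through, so discovery,
    logging, and metrics can live here once instead of in every service."""
    url = pick_instance(service) + path
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# call("subscriptions", "/subscriptions/42")
```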
Then a bunch of people said, well, we really want to do stuff in Python. OK, go do Python. So they built a bunch of Python interfaces that talk to the platform, and then they discovered the platform was evolving forward so fast that they were spending all their time updating those Python libraries, and they realized just how much overhead it is to try to stay in sync with a rapidly evolving base platform. Keeping multiple language platforms in sync is a tax. It's a bit like Apache: anyone can fork Apache, but no one does, because then you'd have to keep rebasing against the original project. The economic cost of forking the project is what stops you doing it, but you buy into Apache because you could fork it if you needed to. It's a similar sort of thing here: the cost of forking the platform is high, so you don't really want to, but you could, and if there's really enough value you can, and some people do. So you end up with a few base platform libraries that are capable and have enough investment behind them, and the tooling ends up being like a nice clear path that somebody created. If you want to go foraging through the undergrowth you can, and maybe you'll find something interesting, because Clojure is wonderful or whatever. You go off, find something, figure out the tooling for it, and it's so cool that a bunch of friends help, and you clear a new path. If you don't clear a path, meaning a set of patterns and tooling, then you're just a loner, and if you end up with a company full of loners it's going to be difficult, because everyone is wandering randomly. So you want the tooling and the platform to be patterns that create beaten tracks through all of the different options. There are too many ways to do everything, so you want standard ways of doing things that people can learn and copy, and it becomes easy: I have a problem to solve, OK, this is the standard tooling everyone else uses, so there's a natural way that people just end up following a path. But you don't prevent people from trying new paths; that's the way innovation happens, and that's how new languages, new tools, and new ways of doing things come into the system. You want to let it evolve, but there is a cost to being the first person to try something, so sometimes you start explicit pathfinder projects to go try something different, and sometimes it's hack days, where somebody goes off and does something weird, people say it looks cool, and more people pile in. I think that's the way to think about it. Certainly having everyone in a company use a different language and different tooling is not going to be a particularly helpful outcome, so in that sense, yes, you can go too far.

Did you ever see that video where there was some sort of millipede and a whole pile of ants dragging this thing a hundred times bigger than an ant? It's one of those videos that comes by on Facebook occasionally. They're all ganging up on it, and it's amazing what happens when everyone collectively knows what they're supposed to be doing and they just keep dragging this thing: we're going to solve this problem together. So, for motivation, pretend you're ants dragging a millipede. We'll end on that.
Have some more beer. Okay, thank you.
Info
Channel: Rackspace Developers
Views: 64,837
Rating: 4.9276018 out of 5
Keywords: Microservices, state of the art, adrian cockcroft, battery ventures, san francisco, rackspace, geekdom, meetup, clusterhq, docker, container
Id: pwpxq9-uw_0
Length: 76min 26sec (4586 seconds)
Published: Mon Jan 19 2015