Building cloud-native applications with Kubernetes and Istio, by Kelsey Hightower

Video Statistics and Information

Captions
Good morning. That's what I get as a response? OK, now I'm gonna go back to my room if that's how we're doing things. All right, so we're gonna be talking about Istio today, and you've been hearing all conference long about policy, observability, security, and reliability. Most people don't have the ability to do any of that, actually, mainly because it's not as easy as most people say. So what I want you to do is really think about why we think about all these things and, with Istio the tool, why it's important for you.

Now, one thing as an industry that we have to come to terms with is this one statement: I believe the majority of the people managing infrastructure just want a PaaS. There's only one requirement: it has to be built by them. No one likes any off-the-shelf things. The famous last words of most people in tech: "I'm gonna roll my own." And once you do that, you're setting yourself up for tragedy.

Now, a lot of people have started to think about building their own PaaS, and with most of the technologies you've been hearing about the last couple of years, a lot of people are starting to adopt things like Docker. And yes, why Docker? Why is Docker becoming the foundation of most platforms that people either buy or build? The real reason Docker is important for most people is that it's the first time people decouple their application from the machine. For all the hype around Docker, there are real benefits, and this idea that I can have an application that's self-contained is where the first stage of portability comes into play. And once you start to have things in Docker, it makes it super easy to decide where you want to run them. But that does not solve all your problems. Anyone here think Docker solves all their problems? I have snake oil to sell you.

Once you get to Docker, you start to think about the next step, and this is where most people start to examine cluster management tools like Kubernetes or Mesos. The goal here is that when you adopt one of these platforms, they also decouple you from something, but this time it's the infrastructure. So once your application is decoupled from the machine, usually the last mile is decoupling yourself from the underlying infrastructure, whether that's virtual machines, a cloud provider, or your laptop. Kubernetes gives you a new set of abstractions that allow you to run that at scale anywhere you want to. People have been running Kubernetes for a couple of years now in production, so it's no longer this new thing that no one knows how to operate, and those people have learned over time that Kubernetes also has missing pieces. Now, it's a much better starting point for building a distributed system or your own platform; you have all these APIs built in, but what's missing?

So when you go and ask people what you should do to actually run your applications, whether it's microservices or monoliths, there are things that you have to do that we mentioned earlier on, and they'll recommend products like this: NGINX will deal with things like application management and how you get traffic to your application; things like OpenTracing give you some visibility on what's talking to what and where your latency is; then you have things like Vault for secrets management, and all of these things have nice APIs; and Prometheus provides this nice adapter to be able to scrape metrics from each unique application in your stack. And then the last box most people are familiar with is this new project called SPIFFE.
What SPIFFE does is solve the real challenge most people have. We're all accustomed to sending secrets to our applications: here's how you connect to the database, here's how you connect to this other thing. But the truth is we have no identity for any of those applications; we actually don't know what we're sending these things to. So things like SPIFFE try to give identity to our applications no matter where they're running.

So when you have all of this, you start to ask yourself where Istio fits in, and just like Docker and Kubernetes, the goal is that Istio doesn't replace those things but integrates with them so that you don't have to. If you think about all of those tools, all the right things to do, I think we're all at a point where we've heard enough, we've read enough, we know that we should have that observability. But how easy is it to actually do? Most people struggle even if they only have one language to implement all of that in. Then someone shows up at the organization and says, "hey, we're doing Haskell now," and you're like, "but why?" There's almost no good answer to that one, but if you find yourself in that situation, now you've got to go find libraries that implement all that security, tracing, and everything, and this is where the challenge comes in: you usually can't do that across all the stacks.

So Istio tries to take a different approach. What's the "how" around this? The way Istio tries to attack this is by sitting in the middle of the traffic flow between the services, and this is where we start to say maybe the network should be smart, because this is the one place that we know everything has to pass through in order for this to work. And once you sit in the data path, you can actually start to aggregate things like telemetry data, and then you can actually enforce those policies. And I'm not just talking firewall rules here and security groups; that stuff is very elementary. It has no concept of what the app is attempting to do; it's just on or off. We need a little bit more insight here.

So I think the best way to really think about this is how it gets implemented. In the Istio world there's a control plane, and most of these systems have some form of a control plane where you can describe your intention. This is where it starts to depart from traditional tools: it's not just command-line driven, you can actually say, hey, this is my entire policy for my entire infrastructure. Docker gives us a way to describe how our application should be built and how it runs at the low-level kernel piece. Kubernetes gives us the same policy model for the entire infrastructure, everything underneath the application. Then what about all the stuff above the application and around it? This is what Istio does: the same model, but applied at the application layer, in between the network, to form this mesh.

And then there's this other thing called backend infrastructure. When you think about your logging and your metrics, that's what we consider your backend application infrastructure; someone has to manage it. All of those things are managed in Istio's world through the Mixer by these adapters. We will never be able to abstract away all the ways to do tracing or all the ways to do logging, but what we can do is take the events that come out of our systems and map them in a way that these adapters can do all the hard work of putting the data in the right place. And then the final piece is how you make sure that these policies are enforced. We do it with the sidecar, and a lot of this is actually built to generate a config that we push down to each node.
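For reference, a rough sketch of what a sidecar-injected pod spec can look like. The Deployment name, image names, and flags below are illustrative assumptions, not the manifests used in the talk; the exact output of istioctl kube-inject depends on the Istio release.

# Hypothetical Deployment after sidecar injection (names and images are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        version: v1
    spec:
      initContainers:
      - name: istio-init          # rewrites the pod's iptables so all traffic
        image: istio/proxy_init   # flows through the sidecar proxy
      containers:
      - name: frontend            # the original application container, untouched
        image: example/frontend:v1
        ports:
        - containerPort: 8080
      - name: istio-proxy         # the injected Envoy sidecar (the data plane)
        image: istio/proxyv2
        args: ["proxy", "sidecar"]

The point of the pattern is exactly what the talk describes: the application container is left alone, and the proxy that enforces policy and reports telemetry is bolted on next to it.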
Now once you have the config in place, the only way to really experience what Istio looks like is to see it in action, so we're going to do that now and get a look at how this thing actually operates. The goal here is to use a real-world application so you can see how this works, so I've written my own little microservice setup, and we'll deploy it from scratch and see how we actually roll Istio into place and what we get when we do. OK, so switch to that.

The first thing we're going to look at here is this Kubernetes cluster, which has a bunch of nodes in it. Again, the whole point of running Kubernetes is that it doesn't matter where I'm running it: it could be my laptop, it could be a cloud provider, or my own data center, and we're abstracted away from that. Then we need to deploy our application. So the first thing we want to do is just deploy a simple stack of our application. Here we'll say "kubectl apply -f kubernetes/deployments", and we have a couple of pieces: a back-end service called bar, another back-end service called foo, and another one called frontend. This is what most people are doing with Kubernetes; it makes it super easy to say, hey, this is how I want my stack deployed.

Once it's deployed, we can get a little bit of information about what's running. Here I'm just going to get all the applications that are running at v1. Great, everything looks like it's running, so at this point I should be able to hit it. This is really straightforward: I should be able to ping the application, and all it does is tell me, hey, you're talking to the front end and you're getting v1 of each of the back ends. Pretty straightforward. I'll make it slightly bigger so we can see it. So the application works.

Now the problem with this is that most people get this far and then they stop. So you go to your dashboard, do you see anything? Yeah, no metrics. You went to Best Buy, you bought all these flat-screen monitors and hung them around the office, people come to visit you, and those screens are doing absolutely nothing, so you're flying blind. This is the situation most people are in, and they rely on tribal knowledge to debug. It's like, no, no, you've got to go to that server, the special server, and the person who can access that server is on vacation right now, so we've got to wait. We've got to get out of that situation; we want some visibility.

Now one thing we could do is ask all the developers to instrument their code so that we get metrics into the system. What are the chances of that happening? Yeah, you laugh: close to zero. So we need a better way, and the way we could do this is to think about the sidecar pattern: without approval, we can go in and attach the functionality we want to the system. So let's try that now. We'll say kubectl apply, the same thing, but this time we're going to inject the sidecar. Now you'll be able to do this on the server side soon, but I'm just going to do it here so you can see how it works. It's "istioctl kube-inject -f", and what this does is inject the sidecar into the same manifest. We don't want to touch the application, and we don't really want to touch the deployment descriptor, but I do want to attach the sidecar to the process. So we'll do the front end, we'll also do the dependencies, and then we'll do both of them. So this should be foo... live demos. All right, so now everything is running; let's look at what the state of the world is now.
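A rough reconstruction of the demo commands, for readers following along. The paths, file names, and label selector are guesses based on the narration, not the exact repo layout:

# Deploy the plain application stack: frontend, foo, and bar.
kubectl apply -f kubernetes/deployments/

# Check that the v1 pods are running.
kubectl get pods -l version=v1

# Hit the frontend; it reports which version of each backend it reached.
curl http://$FRONTEND_IP

# Re-apply the same manifests with the Istio sidecar injected into each pod
# spec; kube-inject rewrites the manifest, the application itself is untouched.
istioctl kube-inject -f kubernetes/deployments/frontend.yaml | kubectl apply -f -
istioctl kube-inject -f kubernetes/deployments/foo.yaml | kubectl apply -f -
istioctl kube-inject -f kubernetes/deployments/bar.yaml | kubectl apply -f -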
So at this point, version equals v1, and you notice here that it says two of two. Instead of one of one, which is just the application, our sidecar is now running side by side with the application. At this point I can actually do this: I'm going to communicate through it, and I'm just going to put it in a watch loop so we can see what's going on. So let's throw this in the loop, and we'll start to see that one of our sub-dependencies is v1 of bar.

Now once we have this running, let's check out what we get for free. Without the developers doing anything, the sidecar sits in the data plane and can grab all of these metrics for the application. Not only is it grabbing metrics about how many requests per second, it understands things like HTTP, HTTP/2, gRPC, even certain database connections. It can handle things like retry logic and exponential backoff, all the things you should do to have your application be production-ready; the sidecar does them for you automatically. Not only does it give us the metrics, it also gives us things like tracing: as these requests come in, how are they flowing through the system? We come here and we can actually see some of the traces. We see that our front end talks to bar and then it talks to foo, and we can see that it's doing that in a serial fashion, and maybe these should be done in parallel. Without that visibility you'd just be straight-up guessing; now we can actually see what's going on, all handled by the sidecar and without updating the application.

The other thing you can do here: you can imagine a world where you're doing the whole DevOps thing and your dev team gets to do all the deployments that they want, and they're always responsible, right? So there's never anything to worry about. What we're going to do is deploy a new version just to simulate what happens here. So we deployed v2. Now, normally in a Kubernetes deployment, when you run these side by side, given how my service is set up, traffic will go to v1 and v2. But once you inject the sidecars, you have some policy. The dev team is like, hey, I only see v1. Well, the good news is I'm in full control now; regardless of what people do, it's invisible to them that I have this control.

If I wanted to show you what's actually going on here, I can delete one of the route rules that I have in place. So I'll delete this rule, and what you'll start to see is the v2 traffic start to show up. OK, now that's not what I actually want, so I'll put my rule back in place. When you look at this rule, it looks a lot like the way we describe our infrastructure to Kubernetes: we can just say, hey, anything destined for bar in a particular namespace, here's how I want that traffic handled. So I'm going to go ahead and put it back now. I'll say "istioctl create -f", and of course you can automate all of these things; there's no need to do them manually. So we'll put this back in place, and as soon as we put the policy back, all of our traffic starts to reflow. We don't need any support in the underlying runtime, we don't have to coordinate with the networking team, we actually have full control via the API.

Now at this point, this is the perfect time to have a conversation, so we're gonna try something. The Istio API is robust enough that you can actually rethink the way we interact with our infrastructure, so why not try something new? All right, one thing we can do is actually start to talk to our infrastructure.
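The rule he deletes and re-creates isn't shown in the captions, but a minimal route rule of that kind, in the config.istio.io/v1alpha2 RouteRule style that istioctl used around the time of this talk, might look roughly like this. The names and schema details are assumptions, and newer Istio releases express the same idea with a VirtualService instead:

# route-rule-bar-v1.yaml (hypothetical)
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: bar-default
  namespace: default
spec:
  destination:
    name: bar          # anything destined for bar in this namespace
  precedence: 1
  route:
  - labels:
      version: v1      # pin all bar traffic to v1, hiding the new v2 pods
    weight: 100

# Applied (and re-applied after deleting it in the demo) with:
#   istioctl create -f route-rule-bar-v1.yaml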
One day in the future it'll be like Star Trek: you'll be talking to your infrastructure, I promise you. Let's try it. "OK Google, talk to Istio." "Sure, let's get the test version of the Istio API. Hello Kelsey, this is a pretty large crowd you have here; I hope you're fluent in the Istio commands. I noticed you're using conference Wi-Fi; the demo gods won't be too happy." "Get the front-end topology." I've got to actually turn the mic on. "Get the front-end topology." "The front-end service depends on service foo and service bar. This is not what I would consider web-scale; it's more like huge."

So how is it able to get this information? The nice thing about this stack is we have an API that will tell us all of our dependencies based on the data flowing through the system. Here's the raw data that I'm using through my API, but I also have the ability to look at the graph, and this graph will actually draw out all of my dependencies based on the traffic going through all the sidecars. We don't have to ask anyone to instrument anything; we just get it for free.

Now the other thing is you can exercise your power here. Say the devs aren't quite listening to you. "Deny access to the bar service." "Access to the bar service is prohibited from the front-end service." So what you do is you wait, and when you wait, you see the 400s and 500s spike. Gotta let them know who's boss. You'll see the site is down, and I know I did it. So then you be responsible and put it back. "Allow access to the bar service." "Access to the bar service is permitted from the front-end service." What should happen is we should see the 400s and the 500s drop.

And it turns out they really need that v2 application, but the thing is, some patterns just don't make sense for all application types. With the new stack we're dealing with, you're not just dealing with browsers; sometimes you're dealing with mobile devices or even cars, and some of the updates that people are performing need to be strategically rolled out in a canary pattern. So ideally we may say, hey, show the new application only to the mobile devices, and what we're going to try to do is see if we can do that surgically. This is something that's really hard to articulate in Kubernetes; there's no way to express it, because that's not its concern; it deals with the underlying infrastructure. We need something at the application level that can do a slightly better job.

So let's return to our command line. We see that we have our first app in the loop, and what we're gonna do is change the next one. Down here, let's make it a little bit bigger, we're going to tell curl to set the user-agent string to mobile, and we're just going to run it. So at this point we have both applications: one simulating a mobile device, the other a browser, and they both see v1. Now what we want to do is only allow the mobile devices to see v2. "Route mobile traffic to bar v2." "Rule update complete. Traffic from mobile clients will be routed to bar version v2." Thank you. I've got to admit, that was pretty dope.

So that's Istio. I hope you check it out. Thank you.
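The "route mobile traffic to bar v2" voice command presumably creates a header-based match rule on top of the default one. A hedged sketch in the same old RouteRule style (field names are assumptions and vary between Istio releases), plus the two curl invocations that simulate the browser and the mobile client:

# route-rule-bar-mobile-v2.yaml (hypothetical)
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: bar-mobile-v2
  namespace: default
spec:
  destination:
    name: bar
  precedence: 2          # evaluated before the default v1 rule
  match:
    request:
      headers:
        user-agent:
          regex: ".*mobile.*"   # only clients identifying as mobile
  route:
  - labels:
      version: v2
    weight: 100

# Simulating the two clients from the demo:
#   curl http://$FRONTEND_IP              # browser user-agent: still sees bar v1
#   curl -A mobile http://$FRONTEND_IP    # mobile user-agent: routed to bar v2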
Info
Channel: O'Reilly
Views: 48,823
Rating: 4.9610515 out of 5
Keywords: O'Reilly Media (Publisher), O'Reilly, OReilly, OReilly Media, PaaS, Istio, Kubernetes, applications, cloud-native, Building, Kelsey Hightower, Docker
Id: 6BYq6hNhceI
Length: 17min 33sec (1053 seconds)
Published: Tue Apr 03 2018