Avi AKO & TKG Integration - Part 1: Overview for Beginners

Captions
Hey everyone, my name is Trevor Spires, and you are watching the first in a multi-part series covering VMware's TKG, that is Tanzu Kubernetes Grid, and its integration with VMware's Avi Networks solution, VMware's enterprise advanced load balancing platform. If you weren't already aware, VMware recently made Avi the default ingress and load balancing operator for all TKGm deployments, and that comes with some distinct benefits for VMware customers. One, if you own TKGm or you buy the TKGm license from VMware, you get a basic version of Avi to use for load balancing and ingress for free with that TKG license. Two, they've massively simplified and streamlined the deployment process of TKG with the AKO, or Avi Kubernetes Operator, integration; it's now really simple to deploy and something you can have up and running very quickly. And three, it means you're bringing all of the rich value of the Avi Networks load balancing solution into your Kubernetes clusters. All of the features of Avi (load balancing, global server load balancing, WAF, and so on) can now be exposed to your Kubernetes cluster, and on top of that you can still take advantage of Avi's rich analytics and logging capabilities, pulling them into your monitoring or even your daily operations of Kubernetes directly through the Avi analytics dashboard.

I'm really excited about this one. I learned a lot during the installation process, and my goal here is simply to share what I've learned. This took me a couple of weeks to get down, and I'm hoping that by watching these videos you can get up to speed much faster than I was able to. If this video helps you, please subscribe, and if you think somebody else could benefit from the series, share it with them; maybe it can help them get their own deployment off the ground.

All right, time to get started. Today we're going to take a look at the brand new TKG and AKO integration. I'm going to give you an overview of the AKO and TKG integration and installation process, and I'm going to try to boil it down to the simplest terms possible so that you can get this up and running in your environment, whether it's for a proof of concept, an evaluation, or maybe even a planned production rollout of a Kubernetes management system like TKG. My goal is for this to be a one-stop-shop series that helps you get that initiative off the ground.

Just a disclaimer up front: this was recorded in early 2021, right after version 1.3 of TKG and AKO came out. I mention that because this is going to be constantly changing and evolving. I also want to say that I've been learning a lot just through brute force, playing around with the technology; I'm certainly not the end-all-be-all expert on this topic. I say that because I'd like to learn from you too. If you see something in the video series where you think we could double-click and dig deeper, or if you have feedback on something that could be improved, or something you think might be interesting for other people to learn about, let me know; I'd really love to collaborate.

A big reason I decided to do this is that Avi AKO is now the default ingress for TKG and, eventually, for the
entire Tanzu portfolio, at least in my estimation. I know that because I've been reading the documentation, and here are some links to documentation that will be critical to your success in rolling this out. I'm hoping the video series will help you get this up and running in a pinch if you need to move quickly, but if this is an environment that you're going to care for and love long term, please read these documents. They are invaluable, and without them there's absolutely no way I would have been able to create this video.

It's always a good idea to make sure everybody is on the same page about which Tanzu Kubernetes implementation we're talking about, so I'm going to give you a quick overview of what's now called TKGm, or TKG multi-cloud. That's what I stood up in my environment, that's what the installation video will walk through, and that's what I'm going to walk you through right now.

TKG is really a Kubernetes management platform, and the architecture is not that complex; if you already have an understanding of Kubernetes, you might actually find it pretty intuitive. The way it works is that there are a couple of tools an engineer or an admin would use. There's the Tanzu CLI, which you download from VMware's website, and there's also kubectl. You install the Tanzu CLI on a local box, or even on a bastion host, and you use it to provision what's called a management cluster. The management cluster exists for two primary reasons. Number one, it's the hub, or the bootstrap environment, that you use to provision additional Kubernetes clusters. Number two, the management cluster can be used for things like hosting centralized services: if you have logging software, or some sort of monitoring or authentication service, you may consider running that inside your management cluster in TKG. You don't have to, and you could deploy a separate cluster for that, but the idea is that the management cluster sits there and probably changes less than your workload clusters, which we'll talk about in a second, so it can be the ideal place to put central services that always need to be up.

Once the management cluster is stood up in your environment, the world is your oyster. All the hard work is done at that point, because the same Tanzu CLI you used to build your management cluster can be used and reused to provision additional clusters in your environment. With a single command you can build an entirely new Kubernetes cluster for a customer, an internal stakeholder, or an application team, and with another command you can scale that cluster out and add additional nodes so it can host more applications and take more traffic. The whole idea is that Kubernetes can be pretty complex and time-consuming to manage all by yourself, so this is really a value-add way of building and managing those Kubernetes clusters.
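Just as a rough sketch of that lifecycle, the core commands look something like this. The cluster name and node count are placeholders, and the exact flags can shift a bit between TKG releases, so treat this as an outline rather than a copy-paste recipe:

# Bootstrap the management cluster using the interactive installer UI
tanzu management-cluster create --ui

# One command to carve out a brand new workload cluster
tanzu cluster create my-dev-cluster --plan dev

# One more command to scale it out with additional worker nodes
tanzu cluster scale my-dev-cluster --worker-machine-count 3

# Pull credentials so the cluster can be used like any other Kubernetes cluster
tanzu cluster kubeconfig get my-dev-cluster --admin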
What's great is that with this architecture you can build out a cluster that's just a dev cluster, where you're only starting to play around, but once the application built on that dev cluster is ready to go live, all you do is type a very similar command into that same Tanzu CLI and, boom, you have another cluster for your production environment. I'm using dev and prod only as examples; you could slice these clusters up in many different ways. It could be dev and prod, it could be different lines of business within your organization, or maybe you're servicing external customers and provisioning clusters for them. There are so many ways you could leverage this concept, but this is the fundamental architecture: you build your management cluster using the Tanzu CLI, and then, also using the Tanzu CLI, you build out additional clusters. Once a cluster is there, it's just a Kubernetes cluster. There are some integrations with other VMware products like Avi, which I'll show you in just a second, but from the perspective of a developer or somebody in ops, it's just Kubernetes. They access that cluster with kubectl just like they would any other Kubernetes cluster, and the experience is the same for them. That's a high-level overview of what TKGm is.

So what is ingress, where does Avi play into TKG, and why is this important? Here's the thing: by default in Kubernetes, when you provision an application, you provision it in what we call pods. As a simple example, let's say I have a deployment with three replicas, so you can picture three pods for that application. You define replicas in Kubernetes within a deployment to add resiliency and scale to your application. If this is a web server, I could say I want three, or a hundred, web server pods, and I just change a single number to scale that up and down. That auto scaling is all fine and dandy, but what about end users? End users just want to open their browser, type in a URL, and access a piece of software; they don't care about the pods or the servers on the back end doing all the work, they just want a unified experience. So we need some way to get user traffic into the cluster, and we need to do it in a way that spreads that traffic across our various pods. That is where ingress and load balancing in Kubernetes come into play, and that's where AKO comes into play for TKG.

Kubernetes is really an orchestration and automation platform; it orchestrates and automates a lot of different things. Specifically, when we talk about load balancing, what it automates is the provisioning of your VIP, meaning your load balancer and your pool members; that's the north-south ingress and egress to your cluster. It also covers L7 services, so SSL, WAF, content switching, redirects, anything L7 or HTTP-related that you need to do with your application, and it will automate the provisioning of DNS records, because users aren't going to access an IP; they need an actual FQDN to hit. All of this is expressed and automated for you through native Kubernetes objects and services, but Kubernetes still has to have something to automate.
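To make that concrete, here is a minimal sketch of the kind of object a developer hands to Kubernetes for a load-balanced service. The name and selector are hypothetical; the point is that Kubernetes records the intent, but something external still has to supply the actual VIP:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-lb                 # hypothetical name
spec:
  type: LoadBalancer           # Kubernetes accepts this request for a VIP...
  selector:
    app: web                   # ...and tracks the matching pods as pool members,
  ports:
    - port: 80                 # but without a provider the external IP just sits pending
      targetPort: 80
EOF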
Kubernetes is not a load balancer; it is an automation platform, and you have to plug a load balancer into it to make it work. That is where AKO comes into play. By setting up an Avi deployment outside your cluster (your cluster is probably running on a series of VMs, either in the cloud or on-prem) and then deploying your application and ingress services, we're able to get traffic in and out of your system very simply by leveraging the integration into Tanzu.

I keep saying AKO; it stands for Avi Kubernetes Operator. In Kubernetes, by the way, an operator is a really common way to plug external systems into your environment, and that's exactly what this does. The Avi Kubernetes Operator runs in Kubernetes, and its sole purpose is to listen to Kubernetes: it watches for when you provision a load balancer or an Ingress object, and as soon as it sees that, it runs all the necessary API commands to provision it for you and build it into your infrastructure. Specifically, in TKG this is all deployed for you automatically; you can also use a Helm chart and other such methods, but in TKG the deployment is automated. When you deploy, say, a workload cluster, an AKO pod is deployed, and it is going to be manipulating your Avi Controller. You pre-deploy the Avi Controller; that's the one action you have to take, deploying the Avi Controller OVA, so you can tell AKO what to talk to. There are some prerequisites that I'll walk you through in the next video, how to install that and how to prep your network, but once the operator is in place, your controller is in place, and everything is configured, it kind of just works. Your users keep interacting with Kubernetes just as they would traditionally, but now whenever they specify ingress configuration or load balancing configuration, AKO automates the deployment of it, which means it automates the creation of the VIP and the FQDN so users can start accessing your application.

I want to make it clear that this is not limited to provisioning VIPs. Basically anything you can do in Avi is now at your disposal within Kubernetes because of this integration. Think about WAF, for example, and there are also ways to integrate GSLB, that is, multi-site or global server load balancing. There are a lot of really rich features you can pull in by leveraging this. Oh, and did I forget to mention analytics? If you've never seen Avi's analytics, you've got to check them out; they are at your disposal as well for your Kubernetes applications once the AKO operator is taking care of load balancing and ingress into your Kubernetes cluster.
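One quick aside before the demo: if you want to confirm the operator actually landed in a workload cluster, AKO typically runs in its own namespace (avi-system by default, in my experience), so a sanity check looks something like this; the pod name is from my lab and may differ in yours:

# The AKO pod is what watches Kubernetes and calls the Avi Controller API
kubectl get pods -n avi-system

# Its logs show the objects it sees and the controller API calls it makes
kubectl logs -n avi-system ako-0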
Stop, it's demo time. I'm going to take you over to my environment and show you how I can spin up an application leveraging AKO, then show you what the application looks like, and in the next video I'll dig deep into how you would get this running in your own environment.

You are now looking at my demo environment. To quickly walk you through what I've got: there's a TKG deployment, which means I've got a TKG management cluster, which you can see here with my two TKG management VMs, and I've also got a workload cluster called tkg-test. These are all hosted in vSphere. I've also got an Avi Controller, and right now there's really nothing configured on it; it's just sitting here listening, and you can see it's listening for things to be configured from my tkg-test cluster.

What I'm going to do now is pull up my VM and run one command to show you. Remember, tanzu is the command line tool that you use to provision and interact with clusters in TKG. If I run tanzu cluster get tkg-test, with the name of my cluster, you can see I've got a cluster running Kubernetes 1.19, and I can look at some details here. I'm also logged into that cluster with kubectl, so I can run kubectl commands and start manipulating the cluster. I'm in a directory that has a YAML file for an application I'm going to provision, so let me show you the application before I provision it, so you can see what it does with Avi.

I'm in the YAML file for the application, and I'll walk you through what's in it. I've got it split into five different chunks. First there's service one, and then another one, service two; these are just the Services that my ingress configuration references, and I wanted to show them because they're part of the provisioning process. Then there's really the most important part: my ingress configuration. In the ingress configuration you can see a few things. First, I've referenced Avi specifically as my ingress class; you would want to do this in your file unless you have Avi set as the default ingress class, which I don't. This is an ingress configured to load balance between some pods and to do routing based on the URI path: one path, /old, routes me to an old web server, and the default path, which matches basically anything else, takes me to the more recent web server. At the very bottom are the last couple of pieces, the deployments of the web servers that the ingress routes to: the old deployment and the new deployment. That's the whole config file.
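Pulling that walkthrough together, a condensed sketch of just the ingress piece might look like the following. The Service names, hostname, and ingress class reference are placeholders from my lab, and the class name your AKO installation expects may differ:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app
  annotations:
    kubernetes.io/ingress.class: avi   # hand this Ingress to AKO; class name depends on your setup
spec:
  rules:
    - host: test-app.example.com       # placeholder FQDN
      http:
        paths:
          - path: /old                 # /old routes to the old web server
            pathType: Prefix
            backend:
              service:
                name: service-old      # hypothetical Service name
                port:
                  number: 80
          - path: /                    # everything else routes to the newer web server
            pathType: Prefix
            backend:
              service:
                name: service-new      # hypothetical Service name
                port:
                  number: 80
EOF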
Now I'm going to provision the application so you can see what actually happens behind the scenes. Right now Kubernetes is provisioning my pods and the various services specified in that file; you can see right here the backing pods I'm running, one on an older version of my web server and one on a more recent version. This takes just a second to provision, and then I'll walk you through what was created behind the scenes in Avi. Let me see if everything is provisioned now. Look at that, all my pods are up and running, so now I'm going to show you in the Avi Controller what that ingress and the configuration I just walked through actually relate to.

I'm looking at my application dashboard, and you can see my test application has been deployed and I'm load balancing to two different back-end pools. These were automatically created based on the URIs and the deployments that I specified for my main path and for my /old path. Avi creates the virtual service, so the VIP; it will also create the FQDN if you have it set up to do so, and it can do things like use a certificate you specify if you're using SSL. I'm not doing that in this environment, but you absolutely should if this is a production application. Finally, in addition to provisioning the VIP, it ties the VIP to the back-end services. If I go into my virtual service, under Policies and then DataScripts, you can see this tkg-test DataScript; it's listening for and keeping track of the URIs coming in as I request the application, so that traffic can be routed to /old or whatever path is specified when I navigate to the application. Taking you back to the main view: by the way, my health checks are still running, which is why it's showing bad health; ideally this would all be green in your environment, and if I waited a while my objects would turn green too.

I've shown you the Avi configuration, but what exactly is happening behind the scenes? If I hit my VIP address and refresh a few times, you can see my application, that nginx app, running the latest version; that's my main URL, and I'm just hitting an nginx 1.19 server. Now, what if I specify that /old URL I was telling you about? As I hit refresh a few times, you can see an older version of nginx, so this is hitting a different pod based on the ingress configuration I put in place for the /old URI. By the way, ignore the fact that this is a 404 error; I didn't edit the nginx configuration, so it isn't listening on that URI path. This was purely to show that I could route to different pods, as demonstrated by the different nginx versions. Normally, if this were a real application, it would be serving traffic on that URI.

Finally, just to clean up my lab, I'll de-provision the application to show you that not only can you provision an application with this sort of system, you can de-provision it too, and that is why this stuff is so powerful. All I need to do is run delete on that same application, and now all of the services associated with the application are being spun down. If I go back to my Avi Controller and click refresh, we should see all those back-end pool members... where did they go? They're gone. They get provisioned and de-provisioned every time an application is spun up and down. That's my demo. Hopefully it made things clear; if you want to figure out how to get to this point, check out my next video, where I'll walk you through the end-to-end installation for getting a demo like this up and running in your environment.
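For reference, the whole demo loop from the shell boils down to something like this; the file name, hostname, and paths are placeholders from my lab:

# Provision the application: the two Services, the Ingress, and the two Deployments
kubectl apply -f test-app.yaml

# Watch the backing pods come up
kubectl get pods

# Hit the default path (newer nginx) and the /old path (older nginx)
curl http://test-app.example.com/
curl http://test-app.example.com/old

# Tear it all down; AKO de-provisions the VIP and pools it created
kubectl delete -f test-app.yaml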
Thank you again for watching the video; I hope it was useful. If you have any questions, please feel welcome to reach out to me directly; I'm happy to help however I can. Keep an eye out for the next videos in the series if you want to learn a little more about this integration and getting it stood up in your environment. Until then: happy microservices, happy AKO, and happy automation. I'll see you soon. Bye now. (Now it's time to cheat. Edit this part out, Bubba. Turn off the health checks, because you're a cheater, cheater, pumpkin eater.)
Info
Channel: Trevor Spires
Views: 1,404
Keywords: vmware, nsx, tanzu, kubernetes, ako, ingress, loadbalancer, avi, avi networks, modern apps, cloud native, automation, web application, load balancing, WAF
Id: xerPusUdzgw
Length: 23min 22sec (1402 seconds)
Published: Tue Apr 13 2021