Install Guide: TKG and AKO 1.3 (Tanzu with Avi on vSphere)

Captions
Hey everyone, I'm Trevor Spires, and you are watching an end-to-end installation guide for TKG and AKO version 1.3 on vSphere. This guide is meant to serve as a manual for somebody who needs to get a TKG dev environment up and running, or possibly a proof-of-concept or demo environment for themselves or for a stakeholder in their organization. If you follow this guide from beginning to end, what you'll end up with is a very simple TKG deployment: one management cluster and one workload cluster, along with the Avi Kubernetes Operator (AKO), which is responsible for controlling ingress and L7/L4 load balancing into that TKG workload cluster. I hope this is useful; it was a lot of fun to make, and if you have any questions about AKO or TKG, please just reach out. Let's get started.

This is part two in the series. This video is meant to be your TKG and Avi sherpa; I'm going to be your Kubernetes and Avi shaman as I guide you through the installation of this system. My hope is that you'll have it installed and have a nice test environment for yourself or your organization to begin deploying or testing your Kubernetes clusters. Here are the versions of everything I have in the lab we're going to walk through today. I'm not going to read through all of them; you can pause the video if you need to know exactly what they are, but you can use this as a guide. I would still recommend you reference the compatibility guides for TKG and AKO if you're moving forward with this, especially if your lab doesn't match this exactly, because depending on the versions of vSphere, Avi, TKG, and Kubernetes there are a lot of variables that will impact your deployment.

I'm going to navigate to the VMware website to show you how to get the bits necessary to install the command line tools, the OVAs, and everything else required. All you need to do is find Tanzu Kubernetes Grid within the My VMware portal. By the way, you're going to see one called Integrated Edition; unless you know you need Integrated Edition, skip that. Today we're focused on the core TKG platform. From here you can download both TKG and the load balancer bits. For the load balancer, all you need is the OVA for the Avi Controller, but for TKG there are quite a number of components, and you won't necessarily download all of them, just what you need for your environment. For Mac, Linux, or Windows, you download your specific Tanzu CLI build. Personally I'm using the VMware-built images for my clusters, so I'll be using versions 1.19 and 1.20 to build them out, but you could use other Kubernetes versions, and there are also Ubuntu images if you'd rather use those than the more pared-down VMware image. Finally, there's kubectl and some extensions. Again, as you're preparing your bootstrap environment, I highly recommend you reference this document to make sure you get the components that are right for you.

I've broken the installation process down into seven high-level steps, and I'm going to walk you through them now.

Step one to installing TKG is prepping your network. The network requirements aren't too intense, but you do need a few things. You need a network to run your Kubernetes nodes, and that network needs to be DHCP-enabled, because when the Kubernetes nodes are spun up they need to be able to pull an IP address to join the cluster and communicate back. Past that, a lot of different topologies are supported. I think some people will deploy their service engines on the same network as their nodes, and I believe that's supported, but I've chosen a dedicated VIP network for my service engines, so my VIPs sit on a different subnet than my nodes. I like this model because I get a separate network for my VIPs and my nodes; it makes more sense to me and is probably closer to what you'd see in a production deployment. Finally, I've got my management network. You don't have to have a separate management network; it's just best practice. It's a really simple environment. You do need DNS and NTP up and running, you need an Avi Controller installed, and you need some bootstrap environment; I'm using an Ubuntu Linux VM, but you could use a Windows VM, a Mac, or even your laptop.

I have one other connection noted here: the service engines need a management connection back to their controller so the controller can push config down. The same goes for AKO; whenever it sees Kubernetes objects it needs to be able to push API calls to the controller so services can be spun up and down on your service engines. These connections are important for the system to operate. I want to reiterate one final time: this is not the only supported topology, and it's probably not even the best topology; it's just my topology. Please reference the documentation and see what makes the most sense for you. If you have firewalls or ACLs in your environment, definitely take the time to ensure the correct ports and protocols are open. If your environment is wide open like mine, things should pretty much just work as long as all your networks are routable to one another.

Finally, if you know anything about Kubernetes already, you probably know that clusters have their node network, the actual IPs you give the Kubernetes VMs (the worker and control plane nodes), but they also use separate subnets for things like pods. We'll specify those at deploy time. Those networks are interesting because, while they exist within the cluster, they're largely encapsulated in Geneve by Antrea as nodes communicate with one another. So while they're important, you don't necessarily need to prep your physical network with those subnets, because they're tunneled between the nodes if you're using Antrea as the CNI.

With your network set up, the next step is to prepare what's called a bootstrap environment. The documentation walks you through the step-by-step setup; you're going to install the Tanzu CLI and kubectl. This is the one piece I'm not going to walk through end to end, because it changes depending on whether your bootstrap environment is a Mac, Linux, or Windows machine, but it's really pretty simple and the doc covers it step by step. You just need to install the Tanzu CLI and kubectl on that machine, and there's one more aspect where you generate an SSH key pair so we can build a trust relationship to authenticate with your clusters.
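For reference, here is a minimal sketch of that bootstrap prep on an Ubuntu VM. This isn't lifted from the video; the bundle file names and version strings are illustrative, so match them to the exact downloads you pulled from the TKG product page.

```bash
# Minimal bootstrap-VM prep sketch (Ubuntu assumed; file names/versions illustrative).

# Tanzu CLI: unpack the downloaded bundle and put the binary on the PATH.
tar -xf tanzu-cli-bundle-linux-amd64.tar
sudo install cli/core/v1.3.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
tanzu version                      # should report a 1.3.x CLI

# kubectl: the TKG-packaged binary ships gzipped.
gunzip kubectl-linux-v1.20.4-vmware.1.gz
sudo install kubectl-linux-v1.20.4-vmware.1 /usr/local/bin/kubectl
kubectl version --client

# SSH key pair for the trust relationship with the cluster nodes;
# the public key gets pasted into the installer later.
ssh-keygen -t rsa -b 4096 -C "tkg-bootstrap"
cat ~/.ssh/id_rsa.pub
```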
I've already got this set up in my environment, but just to show you, I'll run the tanzu version command, and you can see that version 1.3 of the Tanzu CLI is installed on this virtual machine. One important prerequisite is pulling the public key for use when deploying your management cluster, so I need to make sure I go grab that public key. I've already created my key pair for this VM and that's what I'll be using, but you may need to create your own, especially in an environment where you need to share the key or document it somewhere else. Remember, there's a public and a private key; I need the public key specifically, and I'm going to paste it into this notepad so I can reference it later.

Step three: prep the vCenter environment. Again, there's a document walking you through this step by step. Creating folders and resource pools is optional; I've created a folder in my environment but not a resource pool. Things like port groups and the datastore are not optional: you need a network to put your systems on and somewhere to store the bits for those systems. You're also going to download the OVAs for the Kubernetes nodes from VMware's website, you'll need a service account with the permissions that are documented, and you'll need to pull the SSL thumbprint from your vCenter.

Let's look at that now. I'm logged into my vSphere environment, where I'll show you some of the steps I've taken to prepare. The first thing I did is create this TKG folder. I've also downloaded the Photon Kubernetes image from the VMware website and uploaded it as an OVA. I'm not going to show that whole process, but to get these templates all you need to do is download the OVA, right-click in vCenter to deploy it, and walk through the deployment wizard. You don't even need to power the OVA on; as soon as you see it in the vCenter inventory, you can convert it to a template. It just needs to exist as a template so TKG can use it to launch additional systems. I'm using administrator@vsphere.local as my service account because it's the super-user account with all permissions, but the necessary permissions are documented, so if you'd like to follow a least-privilege model, which I highly recommend, please reference the documentation and provision an account with the proper permissions.
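On the SSL thumbprint prerequisite: one way to pull it from the bootstrap VM is with openssl. This is my own shorthand rather than something shown in the video, and the hostname is a placeholder for your vCenter.

```bash
# Grab the vCenter SHA-1 SSL thumbprint the installer asks for
# (replace vcenter.corp.tanzu with your vCenter's FQDN or IP).
echo | openssl s_client -connect vcenter.corp.tanzu:443 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout
```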
Now I'm going to walk you through the Avi installation process; this is the last prerequisite before we start installing and managing our clusters. Again, there's a great document that walks you through this process step by step, and I'd highly recommend you follow it. In my environment I have pre-deployed my Avi Controller just to save a little bit of time. You'll need to deploy this OVA yourself: download the OVA, give it an IP and the relevant information to connect it to your network, and power it on. Now that I've got it spun up, I can simply navigate to the IP address, or the FQDN I have set in DNS for this controller. Here I am logged in: I set my admin password, put in my DNS information, and use the default NTP, though if you have NTP set up privately in your infrastructure, be sure to put that in. The next part is just email/SMTP information, so I'm going to skip through it.

Now I need to select my orchestrator. I'm going to set up the VMware vSphere orchestrator, because that's what's leveraged by this integration. In here I put my username, my password, and my vCenter's FQDN or IP address so my Avi Controller can connect and set up that cloud connector. I'm definitely deploying mine in write access mode, and that's what I'd recommend for you too; it's going to make your life a heck of a lot easier. One interesting thing you might notice: I am not using an SDN integration. You do not need NSX to deploy a TKG cluster. NSX can be used as a CNI for TKG, but generally speaking, if I were deploying this in my environment I would not use NSX unless I were already a heavy, experienced NSX user. Instead I'd choose to use Avi for ingress, like I'm doing here, and simply use Antrea or some other east-west CNI other than NSX.

Here I select my management network. I'm making mine static; I believe either option is supported for your environment. I select my data center, select static routes, and leave everything else at the default. Now I just need to configure my management network. This will be unique to your environment, but I have mine predetermined, so I'll type in a range. All my IP info is in and looks good to me. I've got a range of 10 addresses, but if you're going to deploy a lot of service engines you may want a larger range for your management network. I'm not selecting multi-tenancy in my deployment; I believe it's supported, but I don't need it because this is just a demo environment for me. My setup is now complete.

Now I should be able to log in to my Avi Controller and see that my vCenter cloud has been created. I'll go to Infrastructure > Clouds, and lo and behold, there's my vCenter cloud. It's still making some calls to my vCenter; if I wait a moment this should turn green. There it is, all green.

My next step is to prep my networks. My management network should already be prepped; remember, during the setup process I specified that as my management network. But I also want to make sure I prep my front-end network in particular. This is the network where I've chosen to provision my VIPs for Avi, so I need to give it IP addresses so Avi knows what IPs to assign on that front end. All I need to do is add the subnet I'm going to deploy my VIPs on, and this is basically going to be a pool of IPs that addresses are pulled from as I provision and deprovision my load balancers and ingresses in Kubernetes. So I've got my /23 here, and I'm going to create a pool as well. My pool is created and the network appears to be prepped; I'll save it, and it looks good. You can see I've got 41 IPs in that pool, ready to be consumed.

Now, my node network. This is going to be the server side of the Avi service engines, where we're load balancing traffic to. I'm going to edit this because, while I don't need to specify a pool, I do need to tell Avi that DHCP is enabled on that network, so the service engine can pull an IP on that network to forward traffic. I believe you could specify a static pool here as well within the Avi solution, but since I've got DHCP running I'm just going to rely on that.

My networks are all prepped. Now I need to create DNS and IPAM profiles and tie them to my vSphere cloud. I'll go to Templates > IPAM/DNS Profiles and start by creating an IPAM profile. I'm using the integrated IPAM in Avi, I'm leaving "allocate IP in VRF" unselected in mine, and all I need to do is select the Default-Cloud. I'm setting up my IPAM profile for my front-end network, because that's the one I'll primarily be pulling VIPs from. I also need a DNS profile, so I'll pop into the DNS profile section and create that now. I give it a name, use the integrated DNS (though third-party DNS providers are supported in Avi), and finally add my domain. This is the domain I've chosen: it's my corp.tanzu domain, with tkg.nsxlb as the subdomain, in this configuration anyway. Finally, once you have these configured, go back to your vCenter cloud connector and simply add those profiles to it. In the cloud connector I can select the IPAM profile I just created, and the DNS profile as well. I'm also enabling DHCP here, because the documentation says to do it. And that should be it.

The one step I didn't walk you through is the creation of a service engine group. I'm going to use the default group in my environment and not really make any changes to it. I've got an enterprise license installed on this system, but you may want to check what sort of license you have, whether it's basic, essentials, or enterprise, because that determines what you can actually do within the service engine group. For example, with basic or essentials you'd be limited to active/standby high availability, but because I'm running an enterprise license I could use active/active or elastic HA, which is absolutely what we would recommend. This should be all I need, so I'll go ahead and save. Another piece I didn't highlight: if you need to make significant changes or upload licenses, you can do that here. If you're going to deploy essentials or basic, you need to come in here and edit the licensing accordingly, so your service engine group uses the correct license for your deployment model.
I have one final step for my controller setup: I need to configure a certificate. To do that I go to Templates > Security, where I need to create a certificate for my controller. I click Controller Certificate, click Create, and put in the FQDN of my controller, which I have configured in my DNS provider. I use that same FQDN for the common name, and I add it to the subject alternative name here too. So the controller's FQDN goes in the name, the common name, and the SAN sections, and then I save that certificate. I'm also going to export this certificate, because I need the certificate contents from right here; I copy that to my clipboard and paste it into an additional text file, because I'll have to reference it during the setup process. Now my final step is actually assigning this as the controller's certificate. I'm going to edit this, delete some of the default certificates, and add the new certificate I created on the previous screen; this makes it the certificate used for the controller. You can see that when I refresh, I now have to re-trust the certificate that's been added to the controller, so I'll go ahead and do that. And now my controller should be all set; all the necessary prerequisites are in place and I'm ready to continue the installation.

So now I have everything necessary to begin installing my first Tanzu management cluster. I'm going to use the UI method. There is also a CLI installer; if you're doing a lot of these deployments, or you want to automate the deployment, I'd highly recommend checking that out. If this is your first time deploying a TKG cluster, you might consider using the UI method to become familiar with the inputs and the process required to set it up. I've got a PuTTY session open to that Ubuntu VM where my Tanzu CLI is installed, and since all my prerequisites are in place, all I need to do is run the tanzu management-cluster create command with the --ui flag. The installer is going to open up a web browser on my machine (by the way, I'm using PuTTY, so that's linked to my default browser), and I select that I'm deploying to a vSphere environment. And this is it, I've reached the install process; the prerequisites take much longer than the install. Now all I have to do is put in all those inputs I gathered over the last few minutes, and we'll have this cluster up and running in no time at all.
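As a quick reference, this is the shape of that command. The --bind/--browser flags are optional extras I'm noting here (not shown in the video) for cases where the bootstrap VM has no local browser.

```bash
# Launch the installer UI from the bootstrap VM.
tanzu management-cluster create --ui

# If you need to reach the UI from another machine instead of a local
# browser, something like this should work:
# tanzu management-cluster create --ui --bind 0.0.0.0:8080 --browser none
```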
First things first, I need my vCenter address. I'll take it from here and paste it in. Remember, this is going to be resolved from this VM, so it's very important that you have DNS working on that VM, or your setup is not going to work if you're specifying FQDNs. I keep highlighting that because it's a mistake I made during setup. Put your credentials in, click Connect, make sure everything's good, and the certificate thumbprint pops up here for verification. Now it asks: do you want to configure vSphere with Tanzu, or do you want to deploy a TKG management cluster? There are really two different ways you can deploy TKG on vSphere; I'm going with the TKGm solution, which means I deploy my management cluster on the vSphere environment using this selection. I select my data center, which was pulled from vCenter through this authentication. Now I paste in the public key that I pulled from my Ubuntu VM earlier in the video and saved to a notepad, and click Next.

Now I choose whether this is a development or production deployment; I've chosen development. I give my management cluster a name, and I also specify an IP for my control plane endpoint. This is where Kubernetes will be listening on a particular IP, and it's the API endpoint we use to authenticate with this particular cluster. I'm going to use 100.99 for this because it's outside of my DHCP scope; you'll notice it's actually on the same network I'm putting my nodes on, and that network has a DHCP scope, so I'm just using an IP outside of that scope. I'm also going to make these small worker node types, just because this is a dev deployment, and I should be set to move on.

Now I need to put in the information for my NSX Advanced Load Balancer. I pull the hostname of my NSX ALB controller, which I've got right here in my DNS, and pop that into the controller host field, along with the username and password I created during setup. I'm going to verify, and... it looks like I have fat-fingered my password during the installation. I forgot the password, and now I'm going to have to reinstall my Avi Controller. So let's take a brief break, and we'll reconvene at this interface, hopefully with the correct password for my Avi Controller after I reconfigure all of those settings.

Okay everybody, I have redeployed my Avi Controller, and as you can see, it's now verified. I clicked the Verify button (you missed that part, I wasn't recording), and I'm now authenticated to my Avi Controller because I had the right password. So I'll make my selections. I need to select my Avi cloud; I'll show you in the controller as I go through this. This Default-Cloud is the same default cloud name you see right here, being pulled from the Avi Controller. Next is my service engine group; again, I'm just using the default service engine group. Then the VIP network name: mine is going to be my K8s front-end network, and I can copy it from the controller or type it in manually, but it does need to match. Then the VIP network CIDR, which I paste in here. And finally the certificate authority: this is where I paste the custom certificate I added to Avi earlier. I'm using all self-signed certs, so if you're in a real environment you might need to adjust this to conform to your requirements. I pop that in and we should be all good. I'm not going to add any cluster labels to my deployment, because I'm keeping mine really simple, so I'll click Next. Awesome, now I'm on optional metadata; I'm actually not going to use any optional metadata for mine.
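For context, the NSX ALB page of the wizard just ends up as a handful of variables in the generated cluster config file. The names below are how I remember them from the TKG 1.3 docs and the values are stand-ins from this lab, so treat this as a sketch and check it against the file the wizard actually writes.

```bash
# Hypothetical excerpt of the Avi/NSX ALB variables in a TKG 1.3 cluster
# config (variable names from the docs as I recall them; values are placeholders).
cat <<'EOF' > avi-settings-example.yaml
AVI_ENABLE: "true"
AVI_CONTROLLER: avi-controller.corp.tanzu
AVI_USERNAME: admin
AVI_PASSWORD: "<encoded by the wizard>"
AVI_CLOUD_NAME: Default-Cloud
AVI_SERVICE_ENGINE_GROUP: Default-Group
AVI_DATA_NETWORK: K8s-Frontend
AVI_DATA_NETWORK_CIDR: 10.0.0.0/23
AVI_CA_DATA_B64: "<base64 of the controller certificate you exported>"
AVI_LABELS: ""
EOF
```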
Cool, so now I'm specifying my resources. I'm referencing this VM folder; again, this is the folder in my vCenter that has the Photon images I uploaded and converted to templates in the GUI. I also select my datastore here, and that should be all set. I select my cluster for compute; I only have one cluster, but if you wanted to put it on specific compute, this is how you'd do that. For my Kubernetes network, I specify my K8s workload network; this is the network my nodes get deployed to, and I'm leaving proxy disabled. As you can see, we have a couple of different options for identity management, but I'm going to keep it simple and use local admin. Finally, I need to select the image to deploy: this particular version of TKG wants a Kubernetes 1.20-compliant image, so I'm referencing my 1.20 image that exists in vSphere as a template, and I click Next. I'm not going to register mine with Tanzu Mission Control, because I don't have access to Tanzu Mission Control.

Okay, now all we've got to do is hope we got all those inputs right. As you can see, the wizard itself only takes a couple of minutes to fill out; most of the work is in getting all the prerequisites in place: your networks, your Avi Controller, your images, and so on. Let's see if it works. Here we go, moment of truth... oh, I guess not, I've got to review the configuration first. I'm going to assume I got it all right; I'm living dangerously today. By the way, just so you know, this UI is not required; you could do this purely via the CLI, and the review screen even shows you the equivalent CLI command you'd run to make this happen. All this wizard really does is put a bunch of inputs into a YAML file, so we can build our management cluster from those inputs. Again, if this is your first environment and you're not familiar with provisioning this sort of thing, I think you should use the UI; but if you've done it once before, I highly recommend the CLI method. In fact, a coworker of mine has a great blog, virtuallyGhetto; I'd go there for the step-by-step process of automating this deployment from the command line.

All right, here we go, moment of truth: I'm going to deploy my management cluster, and now all we have to do is wait. The wizard is going to go through its process: it preps the bootstrap environment, does all of its authentication, and ultimately, assuming I didn't mess anything up, in a few moments we should see this bootstrap environment deploying the VMs I specified, the two Kubernetes nodes that will be my management cluster. So right now I'm just going to wait around for a while, and I'll be back to show you exactly what the wizard spits out. Last time I did this it took about an hour, but I'm in a highly nested, very low-resource environment, so it'll probably be faster if you're doing this on real hardware. Stay tuned, we'll be right back.

All right, I've got great news: it worked. I don't know the exact amount of time, probably 30 or 45 minutes, but it went through the process and I now have my management cluster up and running.
I can show you in vCenter: you can see I've got my two VMs. So now what I can do is start provisioning new clusters. I'm going to do that part tomorrow, because this has been a long day; you'll see me wearing a different shirt in just a second, and then I'll show you how to stand up your very first workload cluster.

Okay everybody, I'm back and in a much better mood. One thing I wanted to call out, because I'm going to explain it in the CLI in just a moment: I've talked about AKO, the Avi Kubernetes Operator. That is the system that, whenever you provision ingress or load balancing objects in Kubernetes, configures them on the controller. There is an Avi Kubernetes Operator, and it goes in the workload cluster in this type of deployment. But in the management cluster, what we have is not called the Avi Kubernetes Operator; it's the Avi Kubernetes Operator Operator, or AKOO. The reason for that is that when you deploy workload clusters, AKOO is what's responsible for deploying AKO. In my environment it will deploy this one instance of AKO, but you can use the AKO operator to keep deploying additional AKO pods into your clusters.

So I'm on my bootstrap system; this is where I installed the first management cluster from. From here I'm going to run a few commands to explore the management cluster, show you how it's initially provisioned, and then we'll go on to provisioning our very first workload cluster so we can actually deploy an app behind Avi. By the way, I've already logged in, but if your session has timed out, or you need to log in from a different system, you'd just run the tanzu login command and select the cluster you want to log into. To get my management cluster information, I run tanzu management-cluster get, and it spits back some details about my management cluster. Now I have my management cluster via the Tanzu CLI, but in order to actually interact with the cluster via kubectl, which is generally how admins interact with the cluster itself, I need to log into it. To do that, I get the kubeconfig for the cluster with tanzu management-cluster kubeconfig get. I'm adding the --admin flag because I'm running everything as admin and don't have any other authentication set up; if you have identity management configured, you may not need that flag. I run that, it tells me I have the credentials for the cluster, and it prints the command to run if I'd like to access it. I run that command, and I'm now inside my management cluster: if I get pods, you'll see all the various pods in the management cluster.

Now, I'm not going to spend too much time inside the management cluster; the focus here is the workload cluster. But this is a Kubernetes cluster, so I could start provisioning pods here if I wanted to, create my own namespaces, whatever I need to provision within Kubernetes. The one thing I want to show you in this management cluster is this: I've run a get pods on the tkg-system-networking namespace, which is where the AKO operator goes in the management cluster, and right here is my AKO operator pod.
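Pulling those commands together, here is a short sketch of the management cluster login flow described above; the cluster name tkg-mgmt and the resulting context name are from this lab, so substitute your own.

```bash
# Log in to / inspect the management cluster from the bootstrap VM.
tanzu login                                          # pick the cluster if your session timed out
tanzu management-cluster get                         # status of the management cluster and its nodes
tanzu management-cluster kubeconfig get --admin      # save admin credentials to ~/.kube/config

# Switch kubectl to the context the previous command printed.
kubectl config use-context tkg-mgmt-admin@tkg-mgmt
kubectl get pods -A                                  # everything running in the management cluster
kubectl get pods -n tkg-system-networking            # the AKO operator (AKOO) pod lives here
```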
Remember, this is not AKO; this is AKOO, the operator that's responsible for deploying the other AKO instances. Not the same thing.

So you've got your management cluster provisioned and you can access it. The very next step is to deploy a workload cluster. To do that, we follow a process very similar to deploying the management cluster, but instead of using the user interface, I'm going to use a YAML file this time. When you provision your management cluster, all of the inputs you put into the user interface are actually saved to a YAML file. If I run an ls on this directory, this is the directory the YAML generated for the management cluster was written to, and I can see a file with a kind of random object ID for a name. I'll cat that file so you can see what's inside, and eventually we're going to edit it to turn it into a brand new cluster. Inside, you can see it's really just a YAML file with a bunch of variables specified. This is where, if you wanted to make changes to your configuration, for example if you had a different VIP network you wanted to use for a certain cluster, maybe a prod VIP network and a dev VIP network or something like that, you could edit the network settings and the subnet mask prior to deploying your workload cluster.

I'm going to keep basically all of my config exactly the same; there are only a couple of things I have to change to use this file to deploy my first workload cluster. I'll make a copy of the original file and save it as myfirstcluster.yaml. Now I just need to go into that YAML file and edit a couple of variables so I can deploy it as a workload cluster. Since I'm keeping most of my settings the same as my management cluster, there are only a couple of small things to change. First, the name, because I can't have two clusters with the same name in TKG: the tkg-mgmt name gets changed to my-first-cluster, which is what I'd like it listed as. Second, the control plane endpoint: when you create a cluster, you're also specifying an IP endpoint so you can authenticate with that specific cluster, which is how the cluster listens for API traffic, and I can't have two clusters with the same API IP. So I edit that to use the .98 IP, because that's a free IP in my environment, and I leave all my other settings the same. I've edited my YAML file; now all that's left is to reference it when deploying a workload cluster, and we should end up with a workload cluster with AKO installed. Let's give it a try.
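A sketch of that copy-and-edit step is below. The clusterconfigs directory is where I recall TKG 1.3 saving the wizard's output, and the two variable names are the ones that matter here; confirm both against your own generated file.

```bash
# Copy the wizard-generated config (random file name) to a new workload
# cluster config. Directory path is the TKG 1.3 default as I recall it.
cd ~/.tanzu/tkg/clusterconfigs
cp "$(ls -t *.yaml | head -1)" myfirstcluster.yaml    # grabs the newest .yaml in the directory

# The only two values changed for the new cluster: its name and its own
# control plane endpoint IP (every cluster needs a unique API endpoint).
grep -E 'CLUSTER_NAME|VSPHERE_CONTROL_PLANE_ENDPOINT' myfirstcluster.yaml
# e.g. after editing:
#   CLUSTER_NAME: my-first-cluster
#   VSPHERE_CONTROL_PLANE_ENDPOINT: <a free IP ending in .98 in my lab>
```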
So I'm going to paste my command in here. This is the Tanzu command for creating a cluster, just tanzu cluster create, and then I need to specify the file, so I paste in the path to that myfirstcluster.yaml I created a moment ago. That looks good. I also have an additional flag, because I'd like to deploy Kubernetes version 1.19. The reason I want 1.19 in my environment is strictly that the ingress and load balancing objects in Kubernetes have continued to change over time, and this is the latest version of Kubernetes that works with AKO. My command is in place, so now I just press Enter; it's going to validate my configuration, and if I got everything correct it will begin configuring that cluster in my vSphere environment.

Just as a reminder while this is provisioning: you see how I specified the revision string for that 1.19 release? I pulled it from the documentation; it's the syntax we need to reference that image. But just so you know, for this to work you need a template in your vCenter running that version of Kubernetes, and I have that right here. So we just need to wait, and if there are no issues we should start to see my nodes being deployed here in a moment. And would you look at that, right on time: the control plane node for the workload cluster I just created is being deployed; you can see it has that my-first-cl name. I'm just going to hang tight for a few minutes. It doesn't take very long to deploy workload clusters compared to your first management cluster; I've usually seen workload clusters deploy in less than five minutes in my environment, because they don't have to go through the same startup process. I'll pause the video here, I'll be back in just a second, and then we'll be ready to deploy our very first application into the cluster.

All right, my cluster is up, check it out: workload cluster successfully created. I can show you in my vCenter that I've got my two nodes, just one control plane node and one worker node in my environment, and this is great news. Now that the cluster is there, I need to go through the same process of getting the kubeconfig for this cluster so I can log in, so I'll run that command now. You can see it spat back that it saved the config, so now I just run the printed command and I'm authenticated with the context of my workload cluster. Bada bing, bada boom, let's see what we've got. It looks like I'm in my cluster; I can see all my default pods, and among those default pods we've got a special guest: this is my AKO pod. This is the pod that will be configuring my Avi Controller when I start specifying ingress and load balancer objects in my Kubernetes manifest files.
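Here's a condensed sketch of that create-and-login sequence. The --tkr value is illustrative only; use the exact 1.19 release string from the TKG 1.3 docs that matches the template you imported.

```bash
# Create the workload cluster from the edited config, pinning Kubernetes 1.19
# (replace the --tkr value with the exact release string from the docs).
tanzu cluster create --file ~/.tanzu/tkg/clusterconfigs/myfirstcluster.yaml \
  --tkr v1.19.9---vmware.2-tkg.1

# Once it reports success, fetch the kubeconfig and switch context.
tanzu cluster kubeconfig get my-first-cluster --admin
kubectl config use-context my-first-cluster-admin@my-first-cluster

# AKO should show up among the cluster's pods.
kubectl get pods -A | grep ako
```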
So I've got my cluster up and it's sitting there waiting; my very next step is to go ahead and create an application. I've got my manifest file, I'm going to save it into my cluster, and then we'll create our very first application. Hey, just as a side note, everyone: I did create a secret to authenticate with a public Docker registry, so I could pull my pods when I run my manifest file here in a second. If you have Harbor or some other registry, you can authenticate with that, but in my environment I didn't have that set up; I was hitting the limits for pulling images anonymously from the public registry, so I had to authenticate with it to raise that maximum.

So I have my manifest file; here's my spec. Again, this is the one I went through in the demo video, with no changes. It's simply listening on two different services; underneath those services I've got an ingress referencing Avi, with a /old URL and a default URL; and finally I've got two different pods running in the background, one running the most recent version of nginx and the other running an older version. A second ago I showed you how I authenticated with a public registry so I could pull these images, and you can see here where I've put that regcred secret I created right into my spec. Again, if you have a private registry already set up as the default, you don't need to do this, but I did not, which is why I had to go through that step. Now all I'm going to do is copy and paste this into a file on my system. I'll check the file just to make 100 percent sure that everything is set: looks like I've got my two services, yeah, everything looks good.

There's one more thing I want to show you before the moment of truth. I also needed to edit my hosts file in Windows. If you're not familiar with this, in Windows you can keep a basic text file to add static DNS entries. The reason I'm doing this is that I did not set Avi to be authoritative for the domain, and in order to access the web page I'm going to need some kind of DNS record, so I've just added it manually here. What I've seen other people do, which is very cool, is deploy a DNS service in Avi and then, in your DNS, make that Avi DNS service authoritative for whatever subdomain you're using. For example, my subdomain here is default.tkg.nsxlb, so from my corp.local domain I could point it at my Avi service engine to resolve any requests for that subdomain.

And that should be it; now is the moment of truth. I'm going to use that spec file to deploy my application with the faithful command kubectl apply -f on the ako-split-ingress YAML, press Enter, and we'll see what happens. It looks like my containers are slowly spinning up, so I'll give them another moment. My containers are running, so now it's time to test the app. First and foremost, I'll open up my Avi deployment here, and because I specified those two different back ends, we should see my two pool members: one for the traditional default URL and one for the /old URL. I can pop in here and see the IP that was assigned, and we can map that IP back to the hosts file; you can see that's where I created the DNS record for that IP, and the DNS record does match the app's domain name. So, fingers crossed, I should be able to access this application now. I go over to my Windows machine, click refresh, and look at that: my app is up and running, and my packets are flowing right through my Avi load balancer. I'll click refresh a couple of times; I've got my current nginx config served on my main URL, and if I go to /old I get some errors that show I'm actually running the older version of nginx on that /old path.
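To make the moving parts concrete, here is a hypothetical, condensed version of what that setup looks like: the pull secret, a skeleton of the split-ingress manifest, and the apply. Names like regcred, ako-split-ingress.yaml, and the hostname are from this lab or assumptions on my part; the real manifest from the demo video has more in it.

```bash
# Pull secret for the public Docker registry (only needed because of
# anonymous rate limits; skip it if you pull from Harbor or similar).
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username='<your-user>' --docker-password='<your-password>'

# Skeleton of the split-ingress manifest: two nginx Deployments/Services
# (current and old), and one Ingress sending /old to the old version.
cat <<'EOF' > ako-split-ingress.yaml
apiVersion: apps/v1
kind: Deployment
metadata: {name: nginx-new}
spec:
  replicas: 1
  selector: {matchLabels: {app: nginx-new}}
  template:
    metadata: {labels: {app: nginx-new}}
    spec:
      imagePullSecrets: [{name: regcred}]
      containers: [{name: nginx, image: nginx:latest, ports: [{containerPort: 80}]}]
---
apiVersion: v1
kind: Service
metadata: {name: nginx-new}
spec:
  selector: {app: nginx-new}
  ports: [{port: 80, targetPort: 80}]
---
# ...a second Deployment/Service pair named nginx-old, pinned to an older
# nginx image tag, goes here in the full manifest...
apiVersion: networking.k8s.io/v1beta1   # the older Ingress API still served by 1.19
kind: Ingress
metadata: {name: ako-split-ingress}
spec:
  rules:
  - host: app.tkg.nsxlb.corp.local      # must resolve (hosts file or Avi DNS)
    http:
      paths:
      - path: /old
        backend: {serviceName: nginx-old, servicePort: 80}
      - path: /
        backend: {serviceName: nginx-new, servicePort: 80}
EOF

kubectl apply -f ako-split-ingress.yaml
kubectl get pods                         # wait for the nginx pods to be Running
kubectl get ingress ako-split-ingress    # ADDRESS column shows the Avi VIP
```

Deleting the same file with kubectl delete -f tears the whole thing back down, which is what happens at the end of the video.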
I kind of walked you through what this does in Avi last time. One thing I didn't show you before, which I personally think is pretty compelling: with this setup, as you begin using the service, you get all of the logs and analytics you'd expect from your Avi system. I don't have a lot of traffic hitting this VIP right now, so there isn't a tremendous amount of data coming into the service, but if you spin this up in your environment, you'll be keeping all of the transaction logs. I can even see, in my environment, why this particular request isn't working (I already know why), but in a real environment you could use these analytics to see every single transaction and every single 404 response to troubleshoot the application a little further. Pretty powerful stuff, and it really helps highlight why you might choose to leverage Avi; the analytics are right at the top of the list.

Okay, for my final trick I'm just going to delete the deployment I made, with a kubectl delete on the file, and my pods should be spun down. If I go back to my Avi Controller, my pools should be gone; yep, my pools are gone and nothing is serving anymore. You can see it's been spun down; I no longer have my back-end pools tied to the service. This was all deleted automatically by AKO.

I'm going to show you one more thing, because I've found it a very useful tool for learning how the system works. I'm going to pop into the Operations tab, go to Events, and then dig even deeper into the config audit trail. The reason I'm coming in here is that you can see the exact changes being made by the AKO user: that user account is logging in to my Avi Controller and making changes. I find this very useful because you can see the exact API calls and the exact changes happening behind the scenes whenever you spin up your application in Kubernetes. All of the config changes visible in this audit trail are the AKO system in my cluster making calls to my Avi Controller, spinning VIPs up and spinning VIPs down. Pretty cool stuff; I think it's a useful learning tool if you want to dive deep and understand what's happening behind the scenes.

All right everybody, that's all I've got for you on this one. This has been a real pleasure to work on, and I learned a lot during the process. If you have any questions about AKO or TKG, please feel welcome to reach out to me directly. Until next time: happy AKO, happy load balancing, happy cloud-native applications, and happy April 16th. All right, bye.
Info
Channel: Trevor Spires
Views: 2,762
Keywords: kubernetes, tkg, installation, install guide, vmware, tanzu, avi networks, ingress, load balancing, AKO, AKOO, avi, nsx alb, nsx advanced load balancer
Id: yvjQSAE6UUs
Length: 46min 55sec (2815 seconds)
Published: Tue Apr 20 2021