Feature Friday Episode 68 - Container Service Extension 3.1.1

Video Statistics and Information

Captions
Guy: Hi, and welcome to Feature Fridays. My name is Guy Bartram, Director of Product Marketing, and today I'm delighted to be joined by Sachi. Sachi, please introduce yourself.

Sachi: Sure. Hi everyone, I'm Sachi, a Technical Product Manager with VMware, mainly focusing on Tanzu integration with the VMware Cloud Director products.

Guy: Awesome, and that's what we're here to talk about today: the release of Container Service Extension 3.1.1 and the cool new capabilities it brings to our cloud providers. As we go through this chat, let's put ourselves in the shoes of a cloud provider who has VCD today and is looking at starting their journey with Tanzu Basic, which is now included in the Flex core, so it's basically free within those seven points. How would they leverage CSE and Tanzu to deliver a Kubernetes service? Let's keep that in mind.

Sachi: Sure. CSE 3.1.1 brings in a lot of new features: it supports TKGm cluster deployment, brings in functions to automate ingress load balancing, and provides support for persistent volumes backed by named disks for the Kubernetes cluster. The highlight is that you can do this without making any changes to the underlying infrastructure. If you have, say, the VCD 10.3.1 release, NSX Advanced Load Balancer with a basic license, and NSX-T for networking, you can introduce Container Service Extension and start providing Kubernetes as a service, or enable your customers to self-service Tanzu Kubernetes clusters using Container Service Extension. And for this you don't need to install a TKGm management cluster. The way it works is: you have your underlying infrastructure (VCD, NSX-T, vCenter, Advanced Load Balancer), and on top of that you install Container Service Extension, which provides the Kubernetes orchestration. We don't need a separate TKGm management cluster to run those TKG clusters.
Guy: Okay, so if I've got a production estate already and I'm delivering VCD infrastructure services, is this just: download the CSE plugin, deploy the plugin in VCD, make sure my vSphere level out there is up to the level that supports Tanzu, and turn it on? I think you have to actually turn it on on each host, if I remember rightly.

Sachi: For this you don't need to make any changes to your underlying infrastructure. All the orchestration is done by Container Service Extension, so we don't need to enable vSphere with Kubernetes, and no additional configuration on vCenter is required.

Guy: Okay, that's cool. So I can just create an org VDC and boom, off I go, Kubernetes is here.

Sachi: That's exactly right.

Guy: That's great. So, just to rewind: I'm delivering infrastructure, I've got my provider VDCs and my customers with their org VDCs. I deploy the Container Service Extension, and VCD will then automatically be able to deliver a Kubernetes cluster in an org VDC for my customer?

Sachi: Yes, that is correct.

Guy: Cool. And let's just go through the nomenclature here. I've got Tanzu Basic, which is included in the Flex core, and then we've got Tanzu Standard, which is coming soon, but in this scenario we're only talking about Basic. And we're talking about Tanzu Kubernetes Grid "m", so we're not worrying about the TKGs component, it's all TKGm from now on?

Sachi: Yes. The "m" in TKGm actually means multi-cloud in the Tanzu world, but with Tanzu Basic we only support vSphere, on-premises infrastructure.

Guy: Okay, good, that makes sense. All right, let's dive into it then.

Sachi: Sure, I have a couple of slides that I would like to cover. These are the three main introductions, or new updates, in CSE 3.1.1. CSE 3.1.1 works with the VMware Cloud Director 10.3.1 release, and we are introducing two main plugins to automate storage and load balancing.
Sachi: One is the CPI plugin, which works in conjunction with NSX Advanced Load Balancer to create virtual services for the applications running on the Kubernetes cluster, and the other is the CSI plugin, which provides persistent volumes, as named disks, on your existing storage when you create volumes on the Kubernetes cluster. For CSE native we are dropping support for Photon OS, but on the other hand, for TKG cluster support, we now allow our customers to upload the official TKG-supported templates to the catalog that is consumed by Container Service Extension. So we are letting the provider select the templates from the official VMware download site, where all the official TKG templates are available, and use those templates to provide production-ready Kubernetes clusters through CSE.

Guy: And the CPI, this is the NSX Advanced Load Balancer, the Avi solution that's integrated with VCD, doing the container ingress load balancing?

Sachi: We still need to install an ingress controller, which can be NGINX or Contour, but what CPI introduces is automated load balancer configuration on VCD. When you install an ingress controller, or additional applications like WordPress or any other application that requires ingress access, CPI works with NSX ALB to create the NAT rule and the load balancing service, and it maps it to the external network that's available to the customer organization. So without any human intervention we create those virtual services, which can be accessed by customers externally.

Guy: Oh, that's pretty cool, that makes a lot of sense. And persistent volumes: that's kind of a new thing for containers, which were obviously non-persistent before, and having persistent storage is really going to help with the longevity of those containers.

Sachi: That is correct.
Sachi: You can use persistent volumes to persist data on the cluster, and you can enable data protection or back up the important information held on the persistent volume. You can persist the disks and recreate them later in case your pod gets deleted.

Guy: And this is, I guess, where the new Object Storage Extension plugin for VCD comes in, with that ability to back up containers?

Sachi: Yes. And not only that: we have Tanzu Mission Control generally available today to our partners through Cloud Partner Navigator. So if you are managing this cluster, which by the way is supported, you can attach the TKG cluster to Tanzu Mission Control, install all the agents from Tanzu Mission Control, and enable data protection in TMC to perform backup and restore of the whole namespace or the persistent volumes, and restore it at any point if required.

Guy: Okay, so multiple options there for doing this. Is Tanzu Mission Control at a cost for service providers? I know they need an MSP contract to get it, but is it a zero-cost contract?

Sachi: No, I think you need to sign up for some term subscription before you can access TMC.

Guy: Okay. So maybe the starting point would be to look at OSE: if you want to trial the backup of persistent volumes, the latest version of Object Storage Extension will do that, and then if you want to do more and get more visibility, look at Tanzu Mission Control.

Sachi: That is correct. Tanzu Mission Control also brings in a lot of added value, where you can manage policies, registry support, and a lot of additional features.

Guy: Cool.

Sachi: Next I'll cover a few updates to how a Container Service Extension installation takes place today. At a high level, step one is to install Container Service Extension on your server, then you create a config file to connect to your VMware Cloud Director.
Sachi: Then you upload some templates to the catalog in your VMware Cloud Director. These are the Kubernetes templates. They can be the native templates provided by Container Service Extension, the ones you can see under the remote template cookbook URL, which are for native support, or you can separately upload the TKG templates that you download from the official VMware download site.

Guy: Tell me, if I was new to this, what would be the benefit of looking at those default native Kubernetes templates first versus the Tanzu ones? What would be the difference?

Sachi: The native templates are upstream Kubernetes templates. TKG introduces the added benefits of the advanced load balancing capabilities and persistent volumes, and the TKG templates are managed and hardened by the Tanzu community, so you get support on those TKG templates when you are running those clusters. The upstream Kubernetes templates are basic Kubernetes templates: they don't have the integrated load balancing, so you have to manage those rules manually on VMware Cloud Director.

Guy: Right, and then your support is community support, best effort sort of thing.

Sachi: Yes. So those are the templates we talked about. For the TKG templates we are removing the dependency on the underlying infrastructure, so we don't need to communicate with vCenter to import these templates. We introduced a new mode specifically for TKG where vCenter communication is not required; all of this is built into the configuration specification that connects to your VMware Cloud Director. Once the install is complete, you can run the CSE server, upload the templates to your VMware Cloud Director, and then start providing the services.
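The three installation steps just described (install the server, write a config file, import templates) can be sketched as shell commands. Everything here is illustrative rather than exact: host names, catalog names, the OVA file name, and the config keys are placeholders to be checked against the CSE documentation for your version.

```shell
# Step 1 (commented; needs a management VM with Python available):
#   pip install container-service-extension==3.1.1

# Step 2: a minimal config file pointing CSE at VCD. The keys below only
# approximate the documented layout; all values are placeholders.
cat > config.yaml <<'EOF'
vcd:
  host: vcd.example.com          # placeholder VCD endpoint
  username: administrator
  password: '***'
broker:
  catalog: cse-templates         # shared catalog that will hold the templates
  remote_template_cookbook_url: https://example.com/template_cookbook.yaml  # see CSE docs for the real URL
EOF

# Step 3 (commented): register the extension, import an official TKG
# template downloaded from the VMware site, and start the server.
#   cse install -c config.yaml
#   cse template import -F <downloaded-tkg-template>.ova
#   cse run -c config.yaml
echo "config written"
```

The active part of the sketch only generates the config skeleton; the `cse` invocations are left commented because they require a live VCD endpoint.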
Sachi: Now I'll move on to the additional rights that are required to run these services in the tenant organization. We have added a few more rights for each capability. First, for persistent volume support, we need to allow the tenant user, or whichever user is going to manage the Kubernetes cluster, to create a shared disk. This allows them to create a named disk when a volume is created from the Kubernetes cluster. Authentication happens through the API token for that specific user, so we also need to grant the right that says "Manage user's own API token". The tenant admin can probably have all users' API tokens, but for the cluster author role we at least need this own-API-token right to be available to the user. For the CPI plugin, we need access to the gateway: to view the gateway and to configure the NAT rules and the load balancer, so that the load balancing services and NAT rules are automated and authorized for this user role. Once these rights are published to your users, they can deploy a Kubernetes cluster. This is just a revision to what we have already published for the cluster author role; these are the rights that will be required for users in the customer organization. Once these rights are available to them, they can use the Container Service Extension UI plugin and start deploying clusters.

Guy: Okay, cool. And I guess these are rights you'd need to set for every tenant that's going to have Kubernetes cluster services, right?

Sachi: That is correct. I'll walk through that in the demo when it comes; it's very easy to publish the rights bundles, a one-time operation when you onboard your customer organization onto CSE.

Guy: Okay.

Sachi: We'll go through networking and load balancing next. First, with Container Service Extension, NSX-T, and Advanced Load Balancer, there is no need to change your underlying infrastructure: you can deploy Kubernetes clusters using your routed, isolated, or direct-connected networks.
Sachi: In this release we are also introducing a new option to automate NAT rule creation. If a customer is using a routed network to deploy a Kubernetes cluster and they would like to allow inbound access to this cluster for service access, the NAT rule can be created automatically by CSE. The rule's lifecycle is also managed by CSE: once you delete the cluster, that rule is deleted. This allows a tenant user to expose the cluster while they are creating it.

Guy: Yeah.

Sachi: The next one is load balancing, which is completely new in CSE 3.1.1. If you have NSX Advanced Load Balancer with a basic license, you can consume it today in conjunction with the CPI plugin in CSE. The only requirement here is that when we create the service engine group, it has to be dedicated per tenant.

Guy: Okay. I think there's a limit on the number of service engine groups you can have per instance of the Advanced Load Balancer, so that may be something to consider architecturally when you're looking at this as well, right?

Sachi: Yeah, that is correct. And in the demo we will see how we create the NAT rules and the virtual services on demand when we deploy an application.

Guy: Okay.

Sachi: These are the considerations for upgrading to CSE 3.1.1. As a recap, CSE 3.1.1 works with VCD version 10.3.1, and we introduced TKG support in CSE version 3.0.4. If you are planning to upgrade from that to 3.1.1, there are some steps: first of all, we would like you to back up the clusters, so if you have any stateful application you would like to preserve, take a backup, and then delete all the clusters and any association with the CSE that's running on the old version. You remove the VM placement policy, then upgrade Container Service Extension, upload all the new templates available to you from the VMware customer portal, restart CSE, and then you can start using CSE on the 3.1.1 release.
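The 3.0.4 to 3.1.1 upgrade sequence above, condensed into a commented shell sketch. None of the commands should be taken as exact syntax: Velero is just one assumed example of a Kubernetes-aware backup tool, and the `cse` subcommands should be verified against the CSE GitHub documentation before use.

```shell
# 1. Back up stateful workloads first (Velero shown as one assumed option):
#      velero backup create pre-upgrade --include-namespaces wordpress
# 2. Delete the existing clusters and their association with the old CSE.
# 3. Remove the old VM placement policy (it is recreated by the upgrade).
# 4. Upgrade the server package and the VCD-side registration:
#      pip install --upgrade container-service-extension==3.1.1
#      cse upgrade -c config.yaml
# 5. Import the new templates from the VMware customer portal, then restart:
#      cse template import -F <new-template>.ova
#      cse run -c config.yaml

# Record the checklist so the ordering is explicit:
printf '%s\n' \
  "backup stateful apps" \
  "delete old clusters" \
  "remove VM placement policy" \
  "upgrade CSE server" \
  "import new templates" \
  "restart CSE" > upgrade-order.txt
cat upgrade-order.txt
```

The order matters: clusters built on the old templates must be gone before the placement policy is removed and the server is upgraded.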
Guy: This looks like quite an operation, actually. Is this documented in the user guides and things like that?

Sachi: Right now this is documented on the official GitHub site, which also lists all the supported VCD versions, all the supported templates, and all the guidelines for how to install, use, and upgrade.

Guy: Okay. And is there a chance that users who are on 3.0.4 and have made changes to their placement policies will be able to put those changes back in, or will they have to do that manually?

Sachi: The placement policy will be created once you upgrade CSE to 3.1.1, so you don't have to manually create those policies after the upgrade.

Guy: Yeah, but if I've made any edits to the policy before, I guess I need to re-edit the new policies when they're deployed?

Sachi: That is correct. The VM placement policy for deploying TKG clusters is mainly managed by CSE, so if you have customized it, most likely you need to redo that after the upgrade.

Guy: Okay. And to back up the clusters, sorry, I'm trying to work out how to actually do a backup here of the stateful clusters. If you haven't got OSE, you can use a third-party backup solution, I guess. Is that what we're suggesting?

Sachi: Yes, any Kubernetes-supported backup process can be used to back up your stateful applications.

Guy: All right, okay. Now we can dive into the demo.

[Music]

Sachi: Right now I'm logged into VMware Cloud Director. I have installed Container Service Extension, and the UI plugin has been upgraded to support TKG template deployment; this is the version of the UI plugin. The rights bundles can be published from Administration: we have this native entitlement.
Sachi: On top of that, you can also add the rights we spoke about in the presentation. The CSE native cluster entitlement has the rights bundles necessary to run these capabilities, so if you are enabling your customer organization to use the CPI and CSI plugins, you can publish these additional rights from the provider portal. I'm just highlighting one more time where you can find them.

Guy: So as a provider, I can make it optional for tenants to have access to CPI and CSI?

Sachi: That is correct. This provides a little flexibility to the provider: they can select whether they want to publish the TKG cluster rights to a customer or not.

Guy: So what I'm saying is, you could say: you've got access to TKG clusters, but you haven't got access to load balancing for that cluster?

Sachi: For TKG, the NSX Advanced Load Balancer is required, so I'm not sure that holds for the load balancer, but it does hold true for persistent volumes, so we can use that example. This gives the provider some flexibility to offer specific capabilities to a customer. For example, if you want to allow your customer to use persistent volume creation, you can give your user the shared disk right and, on top of that, under General, the "Manage user's own API token" right. These rights are needed to create a persistent volume disk. So the provider has the option to attach added services to the cluster.

Guy: So it could be a nice upsell for a service. That's what I was trying to understand, yeah.

Sachi: Yes. So that's how you can do it from the rights bundle.
Sachi: Now let's see the cluster creation workflow. Before that, I would like to show you what I have set up for the load balancer. Over here I have the Avi controller; this license can be basic for the ingress load balancing to work with Tanzu Basic. I have created the NSX-T clouds and my service engine groups, and, as you mentioned, we can have up to a maximum of a hundred virtual services. Once this is configured, the provider can log in to the tenant portal and enable load balancing: navigating to Networking, then Edge Gateways, and over here they can enable the load balancer and provide service engine group access. Once this is set up by the provider, everything from this point on is automated.

Guy: Okay, so you could set up service engine groups for each customer if you wanted to, and not give them access if you chose to, but upsell them access to a service engine group and then enable it in here so they could actually start using it.

Sachi: Yeah, and this will work with CPI to automate the ingress load balancing.

Guy: Okay.

Sachi: All right, now we can log in to the tenant portal to see the customer experience of creating a Kubernetes cluster. I'm now logged into the tenant portal as a Kubernetes cluster author, and I'll navigate to Kubernetes Container Clusters; this is the UI plugin that my provider has allowed access to. You can now see two tiles with the CSE install. One is Native, for upstream Kubernetes, which you can use for PoC or testing purposes, and the other is for the production-ready VMware Tanzu Kubernetes Grid clusters that you can create. The templates here are uploaded with the cse template import command, so you can manually upload all the versions that you would like to offer to your customers.
Sachi: All the TKG-supported templates can be used here. Let's call it "demo-cluster", select an org VDC, select the number of nodes, and use a sizing policy, or you can provide manual values for your CPU and memory resources. That's something cool: we don't have to create a sizing policy if you want to use custom values. Select a storage profile, and then this option will create a NAT rule to expose your routed network cluster's IP with an external IP dedicated to your customer organization, which is very efficient. You can also customize the load balancer service network. We review the information, and that's all.

Guy: And it'll give you the external IP when it's complete, or you'd go into the settings and have a look and see what it was?

Sachi: Yes. Right now this cluster is being created, so let's look at a cluster that's already been created to show what the NAT rule looks like. When we click on the deployed cluster, we can see the control plane gateway IP. When we enable the "expose the cluster" option, this IP is fetched from the available external network IP pool in your organization. To see the rule, we can navigate to Networking, go to Edge Gateways, and look at the NAT rules. Over here, we noted down the IP, I think it was 10.150.18.133, and you can see this got created by the CSE plugin. When we delete a cluster, this rule will be deleted as well.

Guy: And if it's accidentally deleted in the meantime, is CSE monitoring it so it will be recreated, like we had before with the network groups?

Sachi: No, that functionality is not there yet.

Guy: Okay, right, so don't delete it.

Sachi: There we were actually using the vCenter refresh as a trigger for the recovery; here there is no trigger to check. But you brought up a good point, we can take it back to engineering.

Guy: Okay, cool.
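The same creation flow the UI drives can also be expressed declaratively and submitted from the CLI. The sketch below is a hypothetical approximation of the cluster specification that `vcd cse cluster apply` consumes in CSE 3.1: treat every field name (the apiVersion, kind, topology, and expose keys) and every value as an assumption to verify against the CSE documentation, not as exact schema.

```shell
# Hypothetical approximation of a CSE 3.1 cluster spec; field names
# and values are placeholders, not verified schema.
cat > demo-cluster.yaml <<'EOF'
apiVersion: cse.vmware.com/v2.0
kind: TKGm
metadata:
  name: demo-cluster
  orgName: acme-org                  # placeholder tenant org
  virtualDataCenterName: acme-ovdc   # placeholder org VDC
spec:
  topology:
    controlPlane:
      count: 1
    workers:
      count: 2                       # node counts picked in the UI
  settings:
    ovdcNetwork: routed-net          # the tenant's routed network
    network:
      expose: true                   # the "expose the cluster" NAT-rule option
EOF
# vcd cse cluster apply demo-cluster.yaml   # would submit it via the vcd CLI
echo "spec written"
```

The `expose: true` setting corresponds to the checkbox in the demo that makes CSE carve out the DNAT rule from the tenant's external IP pool.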
Sachi: All right, let's look at a cluster that we already have in the meantime. Like before, we can download the kubeconfig of any cluster that we deployed, and for whatever ingress applications or ingress controllers we deploy, we can see those virtual IPs being updated over here. One additional thing we do is secure the ingress access to these applications per cluster. For that, my cluster author needs to upload a new certificate to the certificate library. We do that by copying the cluster ID, going into Administration, and then in the Certificates Library uploading that certificate under the name of the cluster ID. This secures the ingress access to your Kubernetes cluster's applications.

Guy: And this is a manual step, right?

Sachi: As of now this is a manual step; I'm sure this process will be improved over time. So let's look at the cluster that we deployed. Over here I have already downloaded my kubeconfig and loaded it into my local kubeconfig file, and we can take a closer look at the pods that got created when we deployed the cluster. A bare Kubernetes cluster will come with the CNI, which is Antrea, CoreDNS, the CSI plugin, and the CPI plugin. Once the cluster is up and running, we can see all the nodes. In this example I have also installed the NGINX ingress controller, and we can see the external IP. When I installed the ingress controller, two things happened on the customer portal. Number one, we can show the virtual services first: when we installed the ingress controller, in this example NGINX, a virtual service was created for the ingress controller for two services, HTTP and HTTPS. Also, to expose these virtual services with an external network IP, we create a NAT rule.
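What triggers that automation on the Kubernetes side is ordinary: any Service of type LoadBalancer (the NGINX ingress controller's own Service is one) is picked up by the CPI, which then programs the virtual service on NSX ALB and the matching DNAT rule on the tenant edge. A minimal sketch, with placeholder names:

```shell
cat > wordpress-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: wordpress          # placeholder app, as in the demo
spec:
  type: LoadBalancer       # the CPI watches for this type
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 8080
EOF
# kubectl apply -f wordpress-svc.yaml
# kubectl get svc wordpress    # EXTERNAL-IP populates once the NAT rule
#                              # and virtual service exist in VCD
```

The kubectl invocations are commented since they need a live cluster; the point is that no VCD-specific annotation is required, the standard Service type is enough.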
Sachi: We can navigate back to the NAT section and look at the ingress controller's NAT IP over here; I'm just trying to showcase the idea.

Guy: So for the ingress controller, that's all automatic? That's pretty cool, that's pretty nice.

Sachi: Yeah. And on top of that, for whatever applications you deploy, in this example I have deployed two WordPress applications, we can see the external IPs being mapped, and for each of these applications additional NAT rules are also present on your edge gateway. You can see .135 being mapped for the services, and also .134 and .136. All of this is automated, and when you delete those applications, the lifecycle of these virtual services and NAT rules will follow through.

Guy: That's a little time saver, isn't it?

Sachi: Yes. And if you want to use persistent volumes, you need to create a storage class file. Over here I have already specified it; you can find an example storage definition on the CSE GitHub. Once you provide this, you can see the storage class defined for your Kubernetes cluster, and once the storage class is created you can start consuming it to create disks on your Kubernetes cluster. Once you create a persistent volume disk, you start noticing it over here under the Kubernetes cluster.

Guy: And there's nothing special about that storage, right? It's just part of the provider VDC?

Sachi: Well, it uses a paravirtual SCSI disk to create the storage, so the underlying infrastructure has to support that, but no added configuration is required from the provider today. And when we create additional disks, you can notice them here, or under the applications you can list all the named disks.

Guy: And can you then apply the storage IOPS controls and things like that on there?

Sachi: I'm not sure about that.

Guy: Okay.
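For reference, a storage class along the lines of the example on the CSE GitHub. The provisioner string and parameter names below are my best recollection of the Cloud Director named-disk CSI driver and should be checked against the published sample before use.

```shell
cat > vcd-storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vcd-named-disk
provisioner: named-disk.csi.cloud-director.vmware.com  # assumed driver name; verify against the CSE sample
reclaimPolicy: Delete
parameters:
  storageProfile: "*"    # VCD storage profile that backs the named disks
  filesystem: ext4
EOF
# kubectl apply -f vcd-storageclass.yaml
# A PVC with storageClassName: vcd-named-disk then shows up as a named
# disk under the tenant's Kubernetes cluster view, as in the demo.
```

As discussed above, the resulting disk is an ordinary named disk on the org VDC's storage profile, so it is visible in the tenant portal alongside any other independent disks.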
Sachi: So these are the main features introduced in CSE 3.1.1. It brings a lot of value, because we have never had automated load balancing for Kubernetes services on Cloud Director before, and I'm pretty excited to see how customers are going to use it.

Guy: Yeah, absolutely. And just to confirm: you had an Enterprise license of NSX Advanced Load Balancer there, but it would work with the basic license? Because I know the Enterprise one is a quite considerable cost.

Sachi: Yes, definitely. All of this will work for L4 load balancing with basic today, so there is no added licensing required, at least for CPI to work.

Guy: Okay, brilliant. I agree, I think it's getting to the point now where you're looking at creating serious production-grade applications that require load balancing, and require persistence and backup and things like that, and that's all now nicely tied together with CSE, and OSE as well providing the backups, potentially.

Sachi: Yes.

Guy: Great. Listen, Sachi, thank you so much for walking us through the 3.1.1 updates. I think it's really exciting to see the product come on.

Sachi: Yep, thank you.

Guy: No problem. Cheers.

[Music]
Info
Channel: VMware Cloud Provider
Views: 6,351
Keywords: vmware, VMware cloud director, Container Service Extension, VMware Cloud Providers, VMware Cloud Director
Id: nzQrazpnzNY
Length: 41min 49sec (2509 seconds)
Published: Fri Nov 05 2021