Wildcard Certificates with Traefik + cert-manager + Let's Encrypt in Kubernetes Tutorial

Video Statistics and Information

Reddit Comments

Great video! It helped me set up certs on my new Traefik install.

👍 1 · u/RedKomrad · Mar 13 2023 · replies
Captions
You landed here because you want to configure certificates so you can access the services in your Kubernetes cluster securely. Well, you've come to the right place. Configuring certificates in Kubernetes can be challenging, but today we're going to break it all down, end to end, to get your cluster trusted certificates.

We're going to use the ever-popular reverse proxy Traefik as our Kubernetes ingress, so we can route web-facing or internal traffic to our Kubernetes cluster. Using Traefik we can easily create HTTPS endpoints to route our traffic securely. Which certificates are we going to use? Certificates provisioned absolutely free from Let's Encrypt, the open-source, secure, and automated way to provide TLS certificates. But we won't store those certificates in Traefik, not this time: we want a highly available Traefik, and to do that we can't keep them in Traefik's local storage. That's where cert-manager comes in. cert-manager is an open-source, cloud-native certificate management system for Kubernetes. It can obtain certificates from a variety of issuers, both public and private, make sure they're up to date, and automatically renew them before they expire. For domain verification we'll use a DNS provider, Cloudflare in this case, though it works with many different providers. And we're going to make all of it highly available, from Traefik to cert-manager to a workload secured with our newly obtained certificates. I've spent the last few weeks creating this tutorial to make it as easy and as repeatable as possible, so by the end of this video you'll have a highly available way to secure your services in Kubernetes. I've also included all of the resources you need on my documentation site.

Before we jump in, a huge thanks to our sponsor, Datree, for making today's video possible. How many times have you applied a Kubernetes configuration only to realize later that it was misconfigured, not configured according to best practices, or just plain wrong? These kinds of misconfigurations can create engineering churn and possibly even downtime. That's where Datree can help. Datree is an open-source tool that prevents Kubernetes misconfigurations from ever reaching your cluster. It does this by scanning objects against a centrally managed policy that comes with Kubernetes best practices built in, but is flexible enough that teams can customize it to their organization's needs. And Datree isn't just a simple YAML linter: along with YAML validation it does schema validation as well as checking against your configured policy. Datree also comes with a dashboard backed by great documentation to help you fix errors fast. It installs in seconds and can run from the CLI, from kubectl, in your CI/CD pipeline, and even as a Kubernetes admission hook that can intercept and test manifests in the last mile. To sum it up, Datree helps prevent Kubernetes misconfigurations from happening in the first place, so empower your engineers by installing Datree today.

So what do we need to get started? First, a Kubernetes cluster. I've created a simple way to get a highly available k3s cluster using Ansible that comes with everything you need, but any Kubernetes cluster will do. Next, a few tools. We're going to use Helm, a package manager for Kubernetes, which will help us install packages into our Kubernetes cluster. First we'll run kubectl get nodes to make sure we can communicate with the cluster, then helm version to make sure Helm is installed and working; a quick sketch of those checks is below.
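Assuming your kubeconfig already points at the cluster, the checks look something like this:

    # confirm we can talk to the Kubernetes cluster
    kubectl get nodes

    # confirm Helm is installed and working
    helm version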
It's also important to mention that you should have MetalLB installed. MetalLB is a load balancer implementation that lets you reach services in your Kubernetes cluster from the outside; if you ran my k3s Ansible playbook, it's installed for you.

First we want to install Traefik. Remember, that's our Kubernetes ingress, which lets us reach the services inside our cluster. We'll create a namespace for Traefik and make sure it shows up in the namespace list, add the Traefik chart repository to Helm on our local machine, update our local repositories with helm repo update, and then install Traefik, providing some values via the --values values.yaml argument. A sketch of those commands follows.
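Something along these lines should work; the chart repository URL and names here follow what was common at the time of this video and may differ with newer Traefik chart releases:

    kubectl create namespace traefik
    kubectl get namespaces

    helm repo add traefik https://helm.traefik.io/traefik
    helm repo update

    helm install --namespace=traefik traefik traefik/traefik --values=values.yaml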
Now, the values.yaml we're going to use. I've stripped most of it down to the basics, so let's talk about what some of these flags mean.

First are some global arguments passed to the Traefik binary when it starts: sendAnonymousUsage=false and checkNewVersion=false. This is totally optional; if you want to send anonymous usage data to Traefik or check for new versions, you're absolutely welcome to. I turn it off.

Next are some additional arguments. This one skips verification of untrusted certificates: it basically tells Traefik that if it sees an insecure or self-signed certificate in between, let it pass through. I allow this because some of the services behind Traefik will have self-signed certificates, so this is a way around that. Next I'm setting the log level to DEBUG. Really this should be INFO once you get it working, but it's nice to keep it in debug mode while you're setting things up in case you run into problems.

Next are some flags for the deployment. enabled is true, since we want the Traefik deployment. replicas is set to one here, but we can set it to three; I said we were going to make this HA, so we might as well. annotations, podAnnotations, additionalContainers, and initContainers are all empty, but I leave the keys in place so I don't have to look them up later if I need them.

ports defines the ports Traefik exposes: expose the web port but redirect it to the websecure port, which has TLS enabled. That means any request coming in over HTTP is automatically redirected to HTTPS, and we're doing it here, at this level, for all of Traefik.

Next is disabling the dashboard on the IngressRoute. This one's a little confusing: if you use IngressRoute (which is itself a little confusing), the chart gives you the option to expose the dashboard through an IngressRoute here. Two things. First, I do use IngressRoute. Kubernetes has an object called Ingress, and Traefik can be used for that, but I like IngressRoute because you get a little more control, it's a bit more advanced, and I get to write all my routes declaratively; I could write them declaratively as an Ingress too, but IngressRoute gives me a few more options. Really, it's up to you. Long story short, I'm disabling the dashboard IngressRoute here because we'll configure it ourselves later on.

Next we're setting up providers. We're telling the chart to install the Kubernetes CRDs (custom resource definitions), and here's what I was just talking about: ingressClass. Our ingress class is going to be traefik-external. You can name this anything you want, even just traefik, but I like to name it according to what I'm going to expose it to, and traefik-external is just a way for me to know I'm exposing this externally. We also enable the Kubernetes Ingress provider. Then publishedService enabled: true; I think this actually needs to be false. I was testing something earlier, and false is the default value, so it's not something we need (it's pretty advanced, too). Next is rbac enabled, role-based access control, and we want that so the Traefik service account can configure our ingress within the cluster and do a few other things.

Next is the actual service. We want it enabled, obviously, and of type LoadBalancer, which means when Traefik stands up it gets an IP address from MetalLB. The loadBalancerIP should be an address in your MetalLB range: if you specified a range, pick one address in it; if you only specified one, that's your one. So that's our Traefik config: pretty small, pretty easy.
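Pulling those pieces together, a values.yaml roughly like the one described might look like this. Treat it as a sketch: key names (especially ports.web.redirectTo and logs.general.level) vary between Traefik chart versions, and the loadBalancerIP is a placeholder you'd replace with an address from your own MetalLB pool.

    globalArguments:
      - "--global.sendanonymoususage=false"
      - "--global.checknewversion=false"

    additionalArguments:
      # allow self-signed certs on backend services
      - "--serversTransport.insecureSkipVerify=true"

    logs:
      general:
        level: DEBUG   # switch to INFO once everything works

    deployment:
      enabled: true
      replicas: 3
      annotations: {}
      podAnnotations: {}
      additionalContainers: []
      initContainers: []

    ports:
      web:
        redirectTo: websecure   # HTTP -> HTTPS for everything
      websecure:
        tls:
          enabled: true

    ingressRoute:
      dashboard:
        enabled: false          # we create our own dashboard route later

    providers:
      kubernetesCRD:
        enabled: true
        ingressClass: traefik-external
      kubernetesIngress:
        enabled: true
        publishedService:
          enabled: false        # the default

    rbac:
      enabled: true

    service:
      enabled: true
      type: LoadBalancer
      spec:
        loadBalancerIP: 192.168.30.80   # placeholder: an IP from your MetalLB range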
So let's actually go back and install this. We're saying helm install, namespace traefik, a release name of traefik, the chart traefik/traefik, and the values pointing at the values.yaml I just explained. Run that, and the chart deploys. To check, run kubectl get svc --all-namespaces -o wide: give me every service in every namespace. We don't have many yet, but we can see traefik right there, it's of type LoadBalancer like we wanted, and we can see its cluster IP and its external IP, which is our MetalLB VIP. Good sign: Traefik is up and running with an external IP address from MetalLB.

Something I like to do next is create a middleware. A middleware is really just a block of configuration that runs on whichever routes you attach it to. This one creates a group of default headers; some of the services I run require additional headers, and these are the ones I've found a lot of my services need, so I'm creating a middleware we can apply to a route later on to add those headers. Let's create it now, even though we won't use it yet: kubectl apply -f default-headers.yaml, and then a quick kubectl get middleware should show default-headers in the list. That's a good sign.

Next we can expose the Traefik dashboard to watch routes as they get created. It's read-only, so you can't modify anything, but it's nice for troubleshooting or seeing what routes you have without going through all of your manifests or running kubectl commands. To do that we need to generate a basic-auth credential, and for that we need htpasswd, which comes from the Apache utilities, so install apache2-utils really quick. Then run htpasswd with a username and a password and pipe the result to openssl base64, so we end up with a base64-encoded credential we can put in a secret and attach to the dashboard. The output here says the username is techno and the password is password; you should probably use something a lot more secure than that.

Now let's create the Kubernetes Secret that ties that credential to the dashboard. It's a Secret named traefik-dashboard-auth, in the traefik namespace (it needs to be in the same namespace as the Traefik service), and the data key users holds our base64-encoded credential. Apply it with kubectl apply -f secret-dashboard.yaml, then check with kubectl get secrets --namespace traefik: you should see traefik-dashboard-auth. You'll also see a secret Helm created for its own release config; don't worry about that one.
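As a sketch, the credential command and the two manifests described above could look like this. The apiVersion traefik.containo.us/v1alpha1 matches Traefik v2 at the time of the video (newer releases use traefik.io/v1alpha1), and the header set shown is just an example; use whatever your services actually need.

    # generate the base64-encoded basic-auth credential:
    #   htpasswd -nb techno password | openssl base64

    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: default-headers
      namespace: default
    spec:
      headers:
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-Forwarded-Proto: https
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: traefik-dashboard-auth
      namespace: traefik
    type: Opaque
    data:
      users: <base64 output of the htpasswd command above>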
Okay, now that we have that, we can actually deploy our dashboard route. You're probably asking: didn't we already deploy the dashboard? Kind of. The dashboard is enabled in Traefik, but we need to create an ingress to actually get to it, so traffic can be routed into it, and here's how. As I mentioned, I'm going to use an IngressRoute: the kind is IngressRoute, I'm naming it traefik-dashboard, it's in the traefik namespace, and the annotation sets the ingress class to traefik-external (use whatever you named your ingress class earlier). The entry point is websecure, and the rule matches the hostname traefik.local.technotim.live. You'll need a DNS entry somewhere in your environment that points that name at the load balancer IP we configured, so that when a client requests it, the request goes to the load balancer IP, the load balancer sends it into your Kubernetes cluster to Traefik, Traefik looks at the Host header, and routes it to the api@internal Traefik service. That might not make a ton of sense, but that's how Traefik exposes its dashboard: a built-in service called api@internal of kind TraefikService.

One more thing: the route references a middleware called traefik-dashboard-basicauth, which we haven't created yet, and we should. It lives in the traefik namespace, and its basicAuth secret is the secret we created earlier, which ties our credential to this route; we're applying middleware in the middle. So first apply the middleware with kubectl apply -f middleware.yaml, then apply the route with kubectl apply -f ingress.yaml.

Now the dashboard should be up and running. If we go to the DNS name we created, traefik.local.technotim.live, we get a "connection is not private" warning, and right now that's actually a good thing: it means Traefik is serving this page over HTTPS, and if we inspect the certificate, it's Traefik's own default certificate. Later we'll replace it with a trusted Let's Encrypt certificate, but for now this is a good sign. Click Advanced, proceed, and we get a login prompt, another good sign. Sign in with techno and password, switch to dark mode, and there's our Traefik dashboard. We don't have any routes yet; we do have the Prometheus metrics endpoint exposed and the web and websecure entry points we saw earlier, but once we create a service we'll see routes show up here. Up until now we're doing great.
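Putting the dashboard pieces together, the middleware and route just described look roughly like this (hostname and apiVersion as used in the video; adjust both for your own domain and Traefik version):

    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: traefik-dashboard-basicauth
      namespace: traefik
    spec:
      basicAuth:
        secret: traefik-dashboard-auth   # the secret created earlier
    ---
    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: traefik-dashboard
      namespace: traefik
      annotations:
        kubernetes.io/ingress.class: traefik-external
    spec:
      entryPoints:
        - websecure
      routes:
        - match: Host(`traefik.local.technotim.live`)
          kind: Rule
          middlewares:
            - name: traefik-dashboard-basicauth
              namespace: traefik
          services:
            - name: api@internal
              kind: TraefikService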
Now, what we've done in the past with Traefik is configure Traefik itself with Let's Encrypt to go fetch certificates and store them in local storage. That's fine, it works, and it's not wrong, just different, but the challenge is that you can't really run a true HA Traefik that way. You can only run one instance, because only one instance can read and write the certificate store on a persistent volume claim, and only one pod can access that disk at a time. This is where cert-manager comes in: cert-manager can be our issuer, it can fetch and renew certificates from Let's Encrypt, and it stores them as secrets within Kubernetes, and as many pods as you want can access those secrets. That frees us from binding to disk: we pull our certificates from cert-manager as secrets and can scale Traefik as wide as we want, because every pod can mount them.

So let's get cert-manager installed, and let me tell you, this is probably the easiest way I've ever managed certificates. I've managed certificates on Windows, Linux, and Mac and it is super duper hard; cert-manager in Kubernetes makes it super easy.

First create the namespace with kubectl create namespace cert-manager, then the usual kubectl get namespaces to make sure it's there. Next we're going to apply some CRDs, or custom resource definitions. This is a big topic, but at a high level, custom resource definitions let you define objects in Kubernetes that Kubernetes doesn't know about natively. For instance, if I run kubectl get certificate right now, Kubernetes says it doesn't have a resource type called certificate, because natively it doesn't know what that is. But we can apply cert-manager's custom resource definitions, which define those resources so these commands work, and it's not just for me running commands; it's how all of these services know how to interact with these objects. So apply the CRDs from cert-manager, and now kubectl get certificate doesn't give an error anymore; it just says "No resources found in default namespace." It knows what I'm talking about, we just don't have any certificates yet. That's CRDs in a nutshell.

Next we run helm install cert-manager jetstack/cert-manager --namespace cert-manager, pass a --values argument pointing at our values file, and a --version of the current latest cert-manager release. Our values file is really small. I'm setting installCRDs to false because we just installed them; I don't want Helm to install them, I'll manage them outside of the chart, and it's a lot easier to apply them ahead of time than to let the package manager do it. Then we can specify the replica count; remember, I want this HA, so we need at least two, and let's go three. Then we set some extra arguments, and these are really important, they tripped me up for a while: for the DNS-01 challenge, use recursive nameservers only, and your recursive nameservers are Cloudflare and Quad9. Why we need that will make sense with the next couple of settings: we give the pods a dnsPolicy of None, so they don't inherit DNS from the host machine, and we hard-code a pod dnsConfig of 1.1.1.1 and 9.9.9.9, again Cloudflare and Quad9.
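As a sketch, the cert-manager values described above might look like the following; the flag and key names match what the jetstack chart documented around this release, so double-check them against your chart version:

    installCRDs: false   # CRDs were applied separately

    replicaCount: 3      # HA

    extraArgs:
      # resolve DNS-01 challenges against public resolvers only
      - --dns01-recursive-nameservers=1.1.1.1:53,9.9.9.9:53
      - --dns01-recursive-nameservers-only

    # don't inherit DNS from the node; go straight to Cloudflare and Quad9
    podDnsPolicy: "None"
    podDnsConfig:
      nameservers:
        - "1.1.1.1"
        - "9.9.9.9"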
So why are we doing this? When we request certificates we're going to use a DNS-01 challenge, which means we verify that we own a domain: cert-manager creates a TXT record, in Cloudflare in our case, and then reads it back to prove ownership. That all works great. The problem is that if these pods use your own local DNS, and your local DNS answers for the name we're validating, the lookup returns the address of that name on your local network, which is not what we want, because then cert-manager can't see the TXT record out in global, public DNS. The way around this is to hard-code the nameservers cert-manager uses and override any DNS those pods would otherwise pick up, which ensures validation always looks at public DNS rather than your internal DNS. This one especially tripped me up for a while this week; it was the pod DNS config I didn't know I needed until I needed it. All it's doing is telling the pods: you're not using local DNS, you're going straight to Cloudflare and Quad9, which is fine, because the only thing these pods really need DNS for outside the cluster is validating records for us.

So let's install it. Helm goes out and installs cert-manager into the cluster; this might take a minute or two depending on your cluster's resources. Then kubectl get pods --namespace cert-manager shows all the pods: our three cert-manager pods, the cainjector pod, and the cert-manager webhook pod, all Running with no restarts, up for 61 seconds. Fantastic.

Now that cert-manager is running, we need to set up our issuer. As I mentioned, we want certificates from Let's Encrypt, so our ClusterIssuer is going to be Let's Encrypt, and we'll start with letsencrypt-staging. Let's talk about that really quick: Let's Encrypt has a staging endpoint, which issues certificates that aren't globally trusted but lets you test against their API, and a production endpoint, which is globally trusted but has rate limiting applied. I highly recommend you get this working, 100% repeatable, in staging before going to production; if you exceed the failure rate they allow on production, you can lock yourself out for up to a week. You've been warned.

So in the ClusterIssuer we set the server to the staging endpoint and the email to your Cloudflare account email. Then the solver: which challenge type are we going to use? The DNS-01 challenge with Cloudflare, as I mentioned earlier. We set the email to our Cloudflare email, set an apiTokenSecretRef pointing at a cloudflare-token-secret (which we haven't created yet) with the key name cloudflare-token, and use a selector to say this applies to the technotim.live zone in Cloudflare. That was a lot, but it boils down to this: we're configuring a DNS-01 solver, passing this issuer a token from Cloudflare, saying it's for the technotim.live zone, and referencing some secrets we're about to create.
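Here's a sketch of that staging ClusterIssuer. One thing not called out in the walkthrough but required by cert-manager's ACME issuer is a privateKeySecretRef, which names the secret where the ACME account key is stored; the email below is a placeholder.

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        email: you@example.com              # your Cloudflare / Let's Encrypt email
        privateKeySecretRef:
          name: letsencrypt-staging         # secret for the ACME account key
        solvers:
          - dns01:
              cloudflare:
                email: you@example.com
                apiTokenSecretRef:
                  name: cloudflare-token-secret
                  key: cloudflare-token
            selector:
              dnsZones:
                - "technotim.live"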
We don't want to create the issuer yet, though; we want to create our secret first. The secret is a cloudflare-token-secret in the cert-manager namespace, with string data under the key cloudflare-token holding a token you get from your Cloudflare dashboard. That token should have read and write access to the domain's DNS we're referencing, so cert-manager can create and then delete the TXT record. Get your token, paste it in, apply the secret, and then create the letsencrypt-staging issuer.

Now we can create a certificate. Remember when kubectl get certificate earlier returned nothing? This is what we're creating, and it's really awesome: we define our certificates in YAML, cert-manager goes out and fetches them, and we apply them to Traefik later on. I'm calling this one local-technotim-live; it can be named whatever you want, you could name it after your domain. I'm putting it in the default namespace. Why default? I've found that the certificate needs to live in the same namespace as the service and ingress that will use it, and I'm going to deploy a service in the default namespace, so setting this to default will save you some pain later on. Then I'm referencing a secretName of local-technotim-live-staging-tls; that's the secret name within Kubernetes that we'll put in our ingress later. The issuerRef is letsencrypt-staging, which we just created, with kind ClusterIssuer, and then the commonName and dnsNames.

I'm going to create a wildcard certificate, and this is totally up to you: you can create a wildcard or one certificate per service. I'm going wildcard because it's a little easier and it's pretty cool, and I'm creating it for the subdomain local.technotim.live. That subdomain doesn't exist publicly; only technotim.live does. I'm creating an internal certificate I can use on all of my services, and by adding the local subdomain, nothing can resolve it externally and my internal names never collide with the external ones. Super geeky, I know, but you can change this to anything you want. It could be example.technotim.live, with the matching dnsName below, for a single certificate on that subdomain; you could use the apex domain and have one certificate there; or you could create a wildcard at that level too. You have lots of options, but I'm doing exactly what I showed before: a wildcard certificate for my subdomain. Totally up to you.
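A sketch of the token secret and the wildcard certificate described here; the token value is a placeholder you paste in from Cloudflare, and the Certificate name is up to you (a -staging suffix keeps it distinct from the production Certificate later):

    apiVersion: v1
    kind: Secret
    metadata:
      name: cloudflare-token-secret
      namespace: cert-manager
    type: Opaque
    stringData:
      cloudflare-token: <your Cloudflare API token>
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: local-technotim-live-staging
      namespace: default        # same namespace as the service/ingress that will use it
    spec:
      secretName: local-technotim-live-staging-tls
      issuerRef:
        name: letsencrypt-staging
        kind: ClusterIssuer
      commonName: "*.local.technotim.live"
      dnsNames:
        - "local.technotim.live"
        - "*.local.technotim.live"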
So now we can create this certificate by applying that YAML, and it's being created. How do we know? Let's tail the logs: kubectl logs --namespace cert-manager -f to follow one of the cert-manager pods (I don't know which pod it'll be, so try them until you find the right one). In the logs we can see cert-manager trying to propagate that DNS name: it's creating a TXT record in Cloudflare containing a key it's looking for, it will verify that key, and then it will issue a certificate to me. This can take some time, sometimes three, four, five minutes, it varies, so you'll just have to wait until it's done. You'll know it's done because it stops queueing and looking for the record and stops reporting that error; it's not really an error, it just can't find the DNS record yet, and that's normal.

Another thing we can do is kubectl get challenges. Here we can see two challenges went out, one for local and one for the wildcard; the one for local has already passed and the wildcard one is still pending. You can run this over and over, or just wait; eventually, when kubectl get challenges shows no challenges, that most likely means you have a certificate.

Now that we've generated the certificate, let's apply it to a service. Let's create a quick and easy nginx deployment and apply the certificate to it. I've created a simple nginx Deployment, replicas of one (let's change that to three for fun), with some typical settings, then a Service for it, and then an ingress. The ingress is a typical IngressRoute: we name it, put it in the default namespace, set the ingress class to traefik-external, set the entry point to websecure, and add a couple of rule matches for routing: either www.nginx.local.technotim.live or nginx.local.technotim.live routes to port 80 on that nginx container. We attach the default-headers middleware we talked about earlier, and here's where the magic is: on the spec we uncomment the tls key and set secretName to local-technotim-live-staging-tls. If you go back to the Certificate, that matches its secretName exactly, which means this route will serve that secret, which is our certificate, for TLS. That's how easy it is to apply certificates to these routes.

Let's apply the whole thing. A quick shortcut: kubectl apply -f on the whole nginx folder, which deploys the Deployment, the IngressRoute, and the Service. Also remember, just like before, you'll want a DNS entry for this hostname somewhere on your network pointing at the MetalLB address, which goes to Traefik, which routes to this container. Once we go to nginx.local.technotim.live we get the same security warning, but if we look at the certificate, we're actually using our wildcard certificate, and look where it's from: staging. So Traefik is serving this traffic using our Let's Encrypt staging certificate from cert-manager. I know it isn't trusted yet, but it means the whole chain works, which is awesome.
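A sketch of that nginx route with TLS attached; the service name nginx here is an assumption about how the deployment's Service is named, and the hostnames follow the video's domain:

    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: nginx
      namespace: default
      annotations:
        kubernetes.io/ingress.class: traefik-external
    spec:
      entryPoints:
        - websecure
      routes:
        - match: Host(`nginx.local.technotim.live`) || Host(`www.nginx.local.technotim.live`)
          kind: Rule
          services:
            - name: nginx        # assumed Service name for the nginx deployment
              port: 80
          middlewares:
            - name: default-headers
      tls:
        secretName: local-technotim-live-staging-tls   # swap to the production secret later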
So how do we go to production? Pretty easy; we just need to create a couple more things. First, a new issuer: a letsencrypt-production ClusterIssuer, which looks exactly like the staging issuer except for a few differences. One, we call it letsencrypt-production, so we know it's the production issuer and we know how to refer to it from our certificates. Two, we point it at the Let's Encrypt production endpoint; notice there's no "staging" in there. You use the same email and even the same secrets, because it's the same provider: there's no lower environment of Cloudflare, we use the same Cloudflare account for both, so the solver doesn't change, the token doesn't change, and the DNS settings don't change either. Pretty simple.

Then we need a production certificate. This also looks exactly the same, except I take "staging" out of the names, so it's local-technotim-live with a secretName of local-technotim-live-tls, and, the one bit I almost missed, the issuerRef is letsencrypt-production.
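For reference, a sketch of the production pair, identical to the staging manifests apart from the names, the secretName, and the ACME server URL; the email is again a placeholder:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com
        privateKeySecretRef:
          name: letsencrypt-production
        solvers:
          - dns01:
              cloudflare:
                email: you@example.com
                apiTokenSecretRef:
                  name: cloudflare-token-secret
                  key: cloudflare-token
            selector:
              dnsZones:
                - "technotim.live"
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: local-technotim-live
      namespace: default
    spec:
      secretName: local-technotim-live-tls
      issuerRef:
        name: letsencrypt-production
        kind: ClusterIssuer
      commonName: "*.local.technotim.live"
      dnsNames:
        - "local.technotim.live"
        - "*.local.technotim.live"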
So first let's apply the letsencrypt-production issuer to create it; now we have an issuer for production that points at Let's Encrypt production. Then apply the production version of our certificate, local-technotim-live.yaml, and the production certificate starts being created: it should create the TXT record, verify it, and then issue a certificate for us. Run kubectl get challenges and we can see one that's pending; the request was approved and it's verifying DNS, checking for that record over and over, then the challenge goes valid, the next one goes pending and does the same thing looking for its TXT record, and eventually there are no challenges left. That's a good sign; it means we should have our certificate. And if we run kubectl get certificates, our certificate is there and READY is true. So far so good.

The last thing to do is update our ingress to use this new certificate secret. Back in the nginx ingress, the secretName was local-technotim-live-staging-tls; replace it with the production version, which, as you can see, leaves the suffix off. I usually use the naked name, without staging or a lower-environment label, when it's production. Apply it with kubectl apply -f ingress.yaml, then describe the ingress: there's our nginx route, and the TLS secretName is the naked one without staging.

Now, remember, your browser may have cached the old certificate, so at this point I'd use a browser you haven't used yet to visit the site, so you don't see the cached one from staging and try to figure out what's going on. New private window in Firefox, go to https://nginx.local.technotim.live, and yes, this worked first try. You can see the nginx welcome page, and if we look at the certificate (More Information, View Certificate), we can see the issuer... wait for it... Let's Encrypt. Let's Encrypt is the issuer and it is trusted; I didn't need to do anything because it's publicly trusted. This is really awesome.

We've accomplished a ton here. From a basic Kubernetes cluster we installed Traefik and got our ingress controller configured and working, then we set up the Traefik dashboard, then we set up cert-manager and Let's Encrypt along with our DNS to issue certificates we can use on our ingresses, as you can see here, and we set that up for staging and production, so we know it works. Congratulations, you now have certificates for your Kubernetes cluster. I hope you learned something today, and remember, if you found anything in this video helpful, don't forget to like and subscribe. Thanks for watching.

Oh, whoa, that's like a mirror, you can actually see my camera pointing down at it. Look at that: "Presented to Techno Tim for passing 100,000 subscribers." This is actually super nice, way nicer than I thought it was going to be. I didn't think there was going to be any depth to it, and I didn't think it was going to be a mirror, and it's heavy, really metal I think. This is because of all of you.
Info
Channel: Techno Tim
Views: 57,680
Keywords: traefik, cert-manager, cert manager, k3s, kubernetes, letsencrypt, ingress, ingress route, kubernetes ingress, k8s, let's encrypt, cloudflare, dns, dns-01, challenge, cluster issuer, free certificates, ssl, tls, tutorial, homelab, home lab, helm, namespace, middleware, wildcard certificates, dashboard, self-signed, certs, production, how to install traefik on kubernetes, how to install cert-manager, load balancer, reverse proxy, router, https, http, X.509, jetstack, crd, ingressroute, ha, high availability, prod
Id: G4CmbYL9UPg
Length: 37min 35sec (2255 seconds)
Published: Sat Aug 06 2022