EKS Fargate vs. GKE Autopilot - Fully Managed Kubernetes Clusters Compared

Video Statistics and Information

Reddit Comments

Even though I like AWS and their great volume of regions/zones, GCP's GKE is better and cost less.

1 upvote · u/InvestingNerd2020 · Mar 25 2021
Captions
Which one is better? What are the differences between the two, and which one should you be using? Today we are going to explore and compare two different services: on one hand AWS EKS with Fargate, and on the other GKE Autopilot from Google Cloud. Why am I comparing those two services? The reason is relatively simple: those are the only fully managed Kubernetes services, or at least that's what they claim.

Before we proceed, let me explain in very easy terms what it means to be a fully managed Kubernetes service. The explanation is relatively simple: it means that we do not manage any aspect of our Kubernetes cluster. Everything is in the hands of the cloud provider. They are responsible for making sure that we have a sufficient number of nodes, that the cluster is always up and running, and all those things. We are supposed to select the name of the cluster and the region where it should run; everything else should be in the hands of the cloud provider. The provider should make sure that the cluster is always up and running, that it is highly available, that it has a sufficient number of nodes, that the size of the nodes is what we happen to need, that there is networking, that there is storage: everything. We select the name of the cluster and the region, and everything else around our cluster just happens automatically. We are supposed to be responsible for the applications running inside those clusters, not for what those clusters are, what is below them, or what is around them. I want somebody else to manage my cluster fully, no exceptions, and the only job I should have is to apply resources inside those clusters.

We are going to compare AWS EKS with Fargate and GKE Autopilot. I already created a video about GKE Autopilot; the link should be somewhere over there, so check it out if you're not familiar with it. I did not create a separate video about EKS with Fargate for a simple reason: it's been around for a while now.
However, if you feel you would benefit from a video dedicated to EKS with Fargate, just let me know and I'll do it.

So, the questions I want to answer here are: which one is better, what are the differences between the two, and which one should you be using? We are going to figure that out by comparing the differences and similarities between the two. Let's move on to the practical part. We are going to explore both of them in parallel, within 20 minutes or less, using the left terminal for Fargate and the right one for GKE Autopilot.

Let's start by creating a cluster. For EKS with Fargate we're going to use eksctl, for a simple reason: doing it manually requires many different steps, and with Terraform it's less convenient than with eksctl. Still, it should be the same no matter which one you use; it doesn't really matter. We are about to create the cluster in whatever is the easiest way, because that's not really the subject of this discussion. The eksctl config file we're going to use defines a ClusterConfig with the name, the region, and the version of Kubernetes we want to use. One thing you will notice is that we are not defining a node pool; we are not defining the nodes of our cluster, only the name and the region. And there is one more important thing: the Fargate profile. Unlike GKE Autopilot, Fargate uses a normal EKS cluster, and then we create a profile that tells the system which namespaces should be used with Fargate. So we cannot create a fully managed Kubernetes cluster as such; we can only tell Fargate to make some segments, some parts of the cluster, fully managed, and we define what is managed and what is not by selecting the namespaces that should be managed with Fargate. It could, of course, be the full cluster, but then we would need to select all the current and future namespaces we want to use. In this case, we are defining two Fargate profiles.
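The config described here can be sketched roughly as follows; the cluster name, region, and Kubernetes version are illustrative, since the exact values are not fully audible in the recording:

```yaml
# fargate.yaml -- eksctl ClusterConfig sketch (name, region, and version are assumed)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: devops-toolkit   # illustrative cluster name
  region: us-east-2      # illustrative region
  version: "1.19"        # illustrative Kubernetes version
# Note: no nodeGroups section -- nodes are left to Fargate
fargateProfiles:
  - name: default
    selectors:
      # Pods in these namespaces run on Fargate-managed capacity
      - namespace: default
      - namespace: kube-system
      - namespace: argocd
  - name: dev
    selectors:
      # Applies only to pods in `dev` carrying the label env=dev
      - namespace: dev
        labels:
          env: dev
```

The cluster is then created with `eksctl create cluster --config-file fargate.yaml`.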
The default profile is based on the default, kube-system, and argocd namespaces, and the dev profile will be applied to the dev namespace, but only to pods that have the label env equals dev, or whatever the labels are. Then, to create the cluster, we execute `eksctl create cluster` with the config file fargate.yaml. Now that cluster is being created.

Let's turn our attention to GKE Autopilot. Unlike Fargate, which doesn't have a single command we can use to create a full cluster with a Fargate profile, at least not through the AWS CLI (we need to go to third-party tools like eksctl), gcloud has the fully managed version of its clusters, Autopilot, baked into its CLI. The command is `gcloud container clusters create-auto`, the subcommand for creating an Autopilot cluster. We're going to call it devops-toolkit, the region will be us-east1, let's say, and finally we need to specify the project, which will be whatever project ID I have in that variable. Now it is creating the cluster, and this will take a while. Before we fast-forward, let me tell you that all the commands I'm running, both those I executed before the session to prepare everything I needed and those during this demo, are in a Gist, and the Gist is in the description.

Now, fast-forwarding to the end of the process, let's see who is going to be faster. Oh, it finally finished. That was disappointing from the AWS side. The GKE Autopilot cluster was created fast. I don't know the exact timing (I will know it in post-production), but GKE finished in probably five minutes or something like that, while AWS took probably twenty minutes or more, give or take. So EKS with Fargate is way slower than GKE Autopilot. But that should not matter much, right? You're probably not creating clusters every day and then getting bored waiting for them to finish. Later on, we will see whether AWS keeps being slower when we start deploying pods; we'll get there.
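The single-command Autopilot creation described here looks roughly like this; the cluster name, region, and project variable are illustrative:

```shell
# Create a fully managed Autopilot cluster in one command
# (cluster name, region, and $PROJECT_ID are assumed values)
gcloud container clusters create-auto devops-toolkit \
  --region us-east1 \
  --project "$PROJECT_ID"
```

`create-auto` is the dedicated subcommand for Autopilot clusters; there is no equivalent single AWS CLI command for an EKS cluster with a Fargate profile.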
Let's briefly discuss whether what we saw so far is really fully managed by the providers or not. In the case of Google, we just executed a command and Google took care of everything. We cannot see the nodes; we cannot manipulate the nodes, or the networking, or this or that. All of those things are managed by Google. If we go to CloudFormation and look at what eksctl created, we can see all the resources it created for us. So eksctl gives us an illusion of being fully managed by AWS, which is not really true: we would need to create all those things ourselves if we didn't have a helper tool like eksctl. If you're an eksctl user, you can say, "Hey, this is fully managed by AWS, because I do not see those things and I do not need to create them." But in reality, they're still there, and they're managed by you; it's just that the eksctl CLI is hiding that from you. Which is okay: as a user, you can say, "I do not care about those things, I will never touch them," and then it's somehow fully managed. But not really. Let's say for now that yes, it is fully managed, and you will not see the nodes, at least not directly. From a simplicity perspective, it really depends on whether you're using eksctl or something else. If you're using Terraform, or going through the AWS CLI, or something else, you would need to create a lot of things; it's not simple. With eksctl, it is, let's say, as simple as Google Cloud, give or take, except that we have to create that Fargate profile, which has its advantages and disadvantages as well. But we will comment on the profile later, probably around the end of the video.

Now let's see what we got from the node perspective. I'm going to execute `kubectl get nodes` in both terminals, and we can see that in both cases we got some initial nodes. Those are the nodes that run system-level processes. We are not paying for those nodes in either case; we are
paying only for what our pods consume. Of course, we are paying for the control plane as well, but that's a separate subject, I guess.

Now, what happens if we want to deploy something? I'll run `kubectl apply --filename deployment.yaml`; I have a simple demo application here, and I'll do the same thing in Google Cloud. Let's see what happens with the pods: `kubectl get pods` and `kubectl get nodes` in both terminals. We can see that both are pending, and the reason the pods we created are pending is that, in both cases, the provider needs to create additional nodes to handle them. So we're going to wait until it creates those nodes, and then I will explain how it actually works. For now, let's see who will be faster to create the nodes.

We can see that GKE already created an additional node; there it is, and soon one of the pods will be running. Soon after that, AWS also created additional nodes. I must stress that we're not paying for those nodes; we're basically paying for the resources that our applications, our pods, are consuming. Soon both of them will be running. It took more or less the same time for both to create additional capacity, so both of them expand the cluster to accept new workloads, and they will contract the cluster when workloads are gone. From a user perspective, they seem to work in more or less the same way: both create additional nodes when we deploy additional pods, and both destroy those nodes when we remove the pods. They contract and expand depending on the workload we have, and all of that happens without our intervention. That's how they manage their clusters: we just need to deploy stuff, and AWS and Google will handle the infrastructure. That's absolutely awesome; from a user perspective, both of them work the same way.
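The demo application itself is not shown in the video; a minimal stand-in Deployment, with an illustrative name and image, might look something like this:

```yaml
# deployment.yaml -- minimal stand-in for the demo app (name and image are assumed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.19   # illustrative image
          # Resource requests matter here: on both Fargate and Autopilot
          # you are billed for what the pods request and consume.
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
```

Applied with `kubectl apply --filename deployment.yaml`, the pods stay Pending until the provider adds the capacity to run them.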
Technically, though, they are different. AWS, on one hand, creates new nodes and assigns pods to those nodes. It is basically saying: "You want to run a pod? Excellent. I'm going to create this node and tell that pod to run inside it." It assigns a pod to a specific node. GKE Autopilot, on the other hand, uses node auto-provisioning (NAP) combined with the cluster autoscaler. Google is not assigning our pods to any specific node; instead, it uses the cluster autoscaler to expand the cluster so that there is sufficient capacity, and then the Kubernetes scheduler runs the pods on those additional nodes, simply because it detected that there is now space inside the cluster to run them. From a user perspective, the effect is almost the same, but technically, Google is much closer to how Kubernetes should work: it leverages Kubernetes through its cluster autoscaler and a few additional things, while AWS assigns pods to specific nodes. That might not seem like a big difference, but I suspect that in the long run, Google's strategy is better and will allow it to keep up with the advancements in Kubernetes, while AWS is doing something very proprietary: it makes its own changes so that the scheduler assigns pods to specific nodes instead of simply running them wherever there is available capacity. But as I said before, from a user perspective, they are more or less the same; those are details running in the background that probably do not matter to the majority of people.

If our clusters are expanding and contracting to accept new workloads, the question is really what happens with DaemonSets and other, let's say, less commonly used resource types. A DaemonSet, for example, is a Kubernetes resource type designed to run on every single node. We often use it for specific purposes like collecting logs: if we want to collect logs from all the pods in our cluster, we need a DaemonSet that will run on every single node and collect logs from
the different pods on that node and ship them somewhere. There are many other usages of DaemonSets, but I'm using logging as a simple example. Now let's see how the two behave if we want to deploy a DaemonSet. Are they fully managed yet still typical Kubernetes clusters, or do they have restrictions, like "you can run this, but you cannot run that"? Let's explore that part. I will apply a DaemonSet definition that I prepared, `kubectl apply --filename daemonset.yaml`, and I will do the same thing in Google Cloud. Now let's watch the pods, `watch kubectl get pods,nodes`, and see what's happening in both cases.

In the case of AWS, DaemonSets are simply not supported at all. You cannot run a DaemonSet, and you probably cannot run a bunch of other things; you are usually limited to Deployments and StatefulSets. So AWS does not allow DaemonSets, and you can see that there are no pods whatsoever. Google, on the other hand, claims that one of the big differences between them and AWS is that you can run anything, DaemonSets among other things. But look at my screen. Theoretically, you could run one, but only if you're lucky, if there happens to be available capacity on the nodes. You can see here that Google does allow DaemonSets, and there are four pods because there are four nodes in this cluster right now, but none of those pods is running; they're all Pending. And that might be even worse. AWS cannot run DaemonSets and does not allow us to run them, period. In the case of Google, you can run DaemonSets, at least you're allowed to, but they do not work, or at least they do not always work. It really depends on whether you're lucky, on whether the nodes running other pods have available capacity or not. In my case, right now, there is no available capacity, so all my DaemonSet pods are in the Pending state.
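The DaemonSet definition applied in the demo is not shown; a minimal log-collector sketch (the name and image are illustrative, and the actual log-shipping configuration is omitted) would look like this:

```yaml
# daemonset.yaml -- minimal log-collector DaemonSet (name and image are assumed)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: log-collector
          image: fluent/fluentd:v1.12-1   # illustrative image
          # One pod per node; requests still count toward node capacity
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

On EKS Fargate this is rejected outright, because Fargate does not support DaemonSets at all; on Autopilot the pods are created, but they can sit in Pending if the existing nodes have no spare capacity, which is exactly what the demo shows.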
So the claim from Google that you can run anything you want in your cluster is not really true. It's more like: "You can try to run anything you want, but you might not be successful with things that span multiple nodes, like DaemonSets."

While I was more in favor of Google for creating the cluster, because it is really fully managed and I do not need to do anything except specify the name of the cluster and the region, in the case of AWS that was much more complex, and the complexity is only obfuscated, made invisible, by eksctl. AWS is not really fully managed; it gives the perception of being fully managed. It is indeed managing our nodes, although we're going to get to that later, because even that is not entirely true. On the other hand, what I like about AWS is that it is more honest about what you can and cannot do: "You cannot run DaemonSets, period," and you know how to deal with it. Google allows us to run things that realistically shouldn't run in this type of cluster, as you can see with the DaemonSet, and there are a bunch of other examples. So AWS is better at preventing us from doing things we shouldn't be doing, at least where pods are concerned.

Now let's go back to the big screen and discuss the differences a bit more. With Autopilot, Google created a special type of cluster that is truly, completely managed by Google. Fargate, on the other hand, is almost fully managed, and it's almost fully managed only if you create a Fargate profile that encompasses all the namespaces, and that's hard to do because namespaces are being created all the time. If I would like, let's say, to deploy Prometheus right now inside my AWS cluster, I would need to change the profile first to add that namespace, if I wanted Prometheus to be in the fully managed part of the cluster. You need to decide which parts of your cluster will have nodes managed by AWS and which parts will have nodes managed by you, unless you list absolutely all the namespaces in the Fargate profile.
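To put a new namespace under Fargate management before deploying into it, the profile's selectors would need to cover it; a sketch of the desired end state, using a hypothetical `monitoring` namespace for Prometheus:

```yaml
# Desired end state of the eksctl config before deploying Prometheus
# (the `monitoring` namespace is a hypothetical example)
fargateProfiles:
  - name: default
    selectors:
      - namespace: default
      - namespace: kube-system
      - namespace: argocd
      - namespace: monitoring   # added so Prometheus pods run on Fargate
```

In practice, Fargate profiles are immutable, so reaching this state means creating an additional profile (e.g. with `eksctl create fargateprofile`) rather than editing one in place. Autopilot has no equivalent step: every namespace, current or future, is fully managed.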
That, however, has advantages and disadvantages. On one hand, if you want a fully managed Kubernetes cluster, completely, with no exceptions, then Google Cloud is the better choice, because the entire cluster is fully managed. However, you might have use cases where you want AWS to manage only parts of your cluster. Now, I'm still confused about what those use cases would be; I haven't found them yet. But if you want AWS to manage part of your cluster while you manage another part, for whatever reason that might be, then Fargate with EKS is arguably the better option. Actually, it's not that it's the better option; it's the only solution that allows you to have a partially fully managed cluster. You still need to manage the resources around that cluster and everything that entails, except the nodes for the pods running in the namespaces that are part of the Fargate profile. If you create the cluster with eksctl, then all those things are mostly abstracted from you, except the profile. The profile is, to me, just silly, unless you mix and match, and then it makes sense. If you don't mix and match, the Fargate profile doesn't really make much sense for a fully managed Kubernetes cluster. Nevertheless, if you ignore those details, which you might say are important or not, we can say that somehow both Fargate and Autopilot are fully managed, up to a level. Let's say that AWS Fargate is almost fully managed, or getting there, while GKE Autopilot is really fully managed: there is no need to create any additional resources, no need to specify which namespaces are fully managed, and so on and so forth.

Now, there are many similarities. You cannot SSH into the nodes with either solution. You cannot change kernel parameters in either case. You cannot use DaemonSets in either case, except that Google allows you to create DaemonSets, but they will not always work. So for now, I will say DaemonSets are out of
the question in both cases, at least until Google figures out how to better calculate the combination of my pods plus DaemonSets. Google might get there in the future, but right now I would not recommend anybody run DaemonSets in GKE Autopilot. So from that perspective, both of them are the same, give or take.

In the case of Google, you need to choose: is this a fully managed cluster or not? In the case of AWS, you get a cluster that is both (more or less) fully managed and not fully managed, depending on the namespaces you choose to include in the Fargate profile. Behind the scenes, and this probably doesn't matter much, they work very differently. AWS creates nodes and then assigns pods to those nodes, while Google creates additional nodes as a result of the cluster autoscaler detecting insufficient capacity. Google will not assign pods to specific nodes; the Kubernetes scheduler does that, because it detects a pending pod and a new node with available capacity, and asks, "Why wouldn't I run the pod over there?" So from a scaling perspective, Google is closer to how Kubernetes is designed to scale. Nevertheless, as a user, you might not care about those things; they are technical details in the background. Finally, both of them are easy to set up, but in the case of AWS, only if you use eksctl; if you use something else, things become more complicated. Even with eksctl, you still need to create that Fargate profile, which makes sense only if you want to mix and match fully managed and not fully managed. If you want everything fully managed, the Fargate profile is kind of silly. But hey.

So which one is better? Which one should you use? I am slightly more inclined toward GKE Autopilot, but the differences are not significant enough for you to change your provider. You should decide whether you want a fully managed, or more or less
fully managed, Kubernetes cluster, and if you do, you will probably not find enough differences to change your provider. If you're already running in Google, you'll probably use Autopilot; if you're running in AWS, you'll use Fargate. You will not switch, because the differences are not significant enough to justify the investment of moving from one to the other. But if you're running somewhere else, like Azure or Alibaba or whatnot, if you're using one of the providers that does not have a fully managed Kubernetes cluster and you do want one, then your only option really is to switch to some other place, and that other place would be AWS or Google. And if you're indifferent about which provider you use, then I would say there is a tiny, small advantage for Autopilot: if you're indifferent about where you go, go to Google.

I must conclude the video with this: both of those are fine; both are doing very similar things. There is a slight preference toward Google, but not significant enough to make you change, and I don't like making videos like that. I like making videos where I say, "Use this, do not use that," but I cannot do that in this case; both of them are similar, let's say.

That's it. Thank you for watching. Remember to subscribe, hit the like button, do all the stuff. And one more thing before you leave: keep sending me your requests in the comments. Most of the videos I've been creating for a while now are the result of you recommending topics I should explore, and this is one of those. So keep telling me what I should explore next. I cannot guarantee I will do all of them within a week, but I'm going through your comments, and I'm choosing subjects based on what you recommend. Thank you so much for watching. See you next time. Cheers.
Info
Channel: DevOps Toolkit by Viktor Farcic
Views: 2,384
Rating: 5 out of 5
Keywords: eks fargate vs gke autopilot, eks, fargate, aws, google cloud, gcp, google cloud platform, gke, autopilot, eks fargate, gke autopilot, eks and fargate, kubernetes, k8s, fully managed kubernetes service, kubernetes service, viktor farcic, devops, devops toolkit, eks vs gke, fargate vs autopilot, aws fargate vs google autopilot, amazon eks, aws fargate, eks adn fargate, google cloud vs aws, gcp vs aws, eks fargate demo, gke autopilot vs fargate, devops tools
Id: -59KDnNrIfc
Length: 24min 52sec (1492 seconds)
Published: Tue Mar 23 2021