Explained: Serverless vs Serverful Backends ⚡

Video Statistics and Information

Captions
You have probably heard that serverless is the most exciting thing in back-end development these days: you don't have to manage servers, everything gets deployed on the edge, and everything just works magically. On the other hand, "serverful" (which is probably the wrong word, but it just means actually using servers) is frowned upon, or treated as a necessary evil that isn't really scalable. In this video, let's do a deep dive into the fundamental difference between the serverful model, which means using servers, and the serverless model. Let's understand it deeply, not just on the surface with definitions, but through practical ideas about how it actually works. If you're new here, make sure you leave a like, subscribe to the channel, and hit the bell icon; it's free and helps the channel grow.

Let's try to understand these compute models. What we are actually doing is computing: you pass a program into a machine, you get output back, and you do something with that output. What we want is to rent that compute, because we can't run a machine 24/7 ourselves, and these servers are much more capable than our local systems. When you want compute, there are basically two options: you own your own computer (your own servers), or you use a cloud service provider. Within a cloud provider there are three flavors: a managed service, something like S3 (I'm using AWS for the examples); serverless, something like Lambda; and serverful, which is a weird way of saying you are actually using servers and also have to manage them yourself. This video is about the last two.

Now let's go on a journey and understand exactly what the difference is between a serverless and a serverful API. Say you are deploying an API on an EC2 instance, and all it does is return the string "hello world" whenever you call its URL. In the serverful case, you create an EC2 instance and get an IP address, say 1.2.3.4. You can think of it like this: AWS has racks of servers, and one of them (it's really virtualized hardware, but for simplicity treat it as an independent machine) is assigned to you, sitting somewhere in, say, the us-east-1 zone with the IP 1.2.3.4. That's it: it is an actual, real computer, just like your own, so you can do pretty much anything you want on it. It runs 24/7, you have an entry point in the form of an IP address, and you are good to go. You can access the API by visiting 1.2.3.4/api, and it will return the "hello world" string.
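To make this concrete, here is a minimal sketch of the kind of hello-world API described above, written in TypeScript with Node's built-in http module. The port, route, and file name are illustrative assumptions; the video only says the API returns "hello world".

```typescript
// server.ts - a minimal sketch of the kind of API you might run on an EC2 instance.
// The port and the /api route are assumptions for illustration.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url === "/api") {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("hello world"); // the string the API returns
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Listen on all interfaces so the server is reachable via the instance's public IP,
// e.g. http://1.2.3.4/api (port 80 usually needs root or a reverse proxy in front).
server.listen(80, "0.0.0.0", () => {
  console.log("API listening on port 80");
});
```

The key point is that this is a long-running process you start and keep alive yourself on the machine.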
If the server crashes, you have to restart it; if it goes down, you have to reboot it yourself. Why? Because this is a non-managed, serverful instance just sitting there in a rack. Even to create and deploy this API, you have to SSH into the server: you sit at your computer, SSH into a machine somewhere on the other side of the world, get into its CLI, and run the same commands you would run when deploying on your own local machine, just like doing npm start or npm run locally. In a nutshell, these are just computers like yours, but sitting in the cloud.

Serverless services, on the other hand, usually require you to upload a bunch of code that gets executed behind a predefined endpoint, and that endpoint will almost never be an IP address. Instead they give you a hostname, something like us-east.api.amazon..., whatever it is. You have the option to use custom domains and all the fancy stuff, but at the end of the day you are getting an abstracted solution. It's not a "pure" solution, because with EC2 the IP address is associated with that specific server sitting in the compute rack: you get the actual IP, you have Linux access, you can install any software and do whatever you want. The serverless (Lambda) architecture, when deployed, gives you a URL instead of an IP address, and the reason is that serverless by definition means your code is not bound to a specific server. Once an EC2 IP address is assigned (setting aside things like anycast IPs for simplicity), it always goes to the same machine. With Lambda, one request might land on a container at, say, 3.4.5.6, and the next on a different container at 4.5.6.7. This isn't necessarily happening purely at the DNS level; there can be internal load balancers within AWS itself, so AWS might just respond with 1.2.3.4 and then internally configure and redirect your requests onto any compute instance it has.

With this architecture you actually lose a lot of things; it is a lossy abstraction, because you often lose access to your Linux image. Cloudflare Workers and similar zero-boot environments (Vercel edge functions also use Cloudflare Workers) even take the Node environment away from you. Why? Because to execute your code really quickly and return the result, they have to create the full environment from scratch every single time.
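For comparison, here is a minimal sketch of the same hello-world API as an AWS Lambda handler in TypeScript, assuming the Node.js runtime and an API Gateway / Function URL style response shape. The file name is an assumption.

```typescript
// handler.ts - the same API expressed as a Lambda function.
// There is no server to start or keep alive: you upload this code and
// the platform runs the handler on demand, behind the URL it gives you.
export const handler = async (event: unknown) => {
  return {
    statusCode: 200,
    headers: { "Content-Type": "text/plain" },
    body: "hello world",
  };
};
```

Notice what is missing compared with the EC2 version: no port, no process, no SSH session; the deployment artifact is just the code.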
It's a lot like lazy imports in code: your infrastructure is lazily allocated to you. And because the infrastructure is allocated lazily, your code layer has to execute super fast, which is why so much is abstracted away. They don't give you a proper Linux environment to boot, in some cases not even a Node.js environment; they just want your source code so they can execute it as soon as possible and return the output. The moment you hit the Lambda URL, Lambda provisions some hardware on the AWS stack, executes your API, and returns the result.

You might be thinking that while Lambda provisions all this infrastructure, we lose a lot of time compared to EC2, where the infrastructure already exists and the app is already running, ready to respond. And that is true: in that sense serverless is slower than a serverful architecture, and it will always be slower, because it comes at the cost of provisioning infrastructure on demand. But it's comparatively okay. If the EC2 endpoint takes 20 milliseconds to respond, the serverless one might take 20 plus 100 milliseconds of boot time, or maybe 500 in the worst case, and then it drops back to roughly 20 because AWS has a lot of built-in optimizations. Even if you take 520 milliseconds as the worst case, you get the scalability factor baked in: as I mentioned, AWS can load balance it, allocate it across different compute instances, and run many instances of your code in parallel. So you get roughly 520 milliseconds for a million clients, whereas on EC2 you get 20 milliseconds for the first 100 clients, then 50 milliseconds for the next thousand, then 500 milliseconds for the next 10,000, then five seconds for the next million. You see the drop-off: the serverless compute can handle a million requests, whereas with serverful you have to build the load balancer yourself just to keep the same performance. These numbers are just an example, don't take them as hard figures, but that's the comparison where serverless actually helps you.

Another thing people mention a lot is that serverless is cheaper and serverful (running an EC2 instance 24/7) is costlier, but that is actually a lie. If you are a big organization, or you need compute power essentially 24/7, then running an EC2 instance is much cheaper and much better than using Lambdas, provided you have done your calculations right. If you run the numbers, the expense of running EC2 24/7 compared with running Lambdas 24/7 shows that Lambdas are much more expensive, and that should be the case: AWS is doing a lot of work for you, and whenever you go up the abstraction layer and consume those resources, the price increases. A higher-abstraction SaaS product costs much more than a lower-abstraction one.
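The cost argument can be sanity-checked with a quick back-of-envelope calculation. The sketch below uses placeholder prices and traffic numbers (not current AWS rates) purely to illustrate the shape of the comparison for a workload that is busy 24/7.

```typescript
// cost-sketch.ts - back-of-envelope comparison of running compute 24/7.
// All rates below are ILLUSTRATIVE assumptions, not real AWS pricing.
const HOURS_PER_MONTH = 730;

// Serverful: one always-on instance at an assumed hourly rate.
const ec2HourlyRate = 0.05; // $/hour, assumed
const ec2Monthly = ec2HourlyRate * HOURS_PER_MONTH;

// Serverless: the equivalent of one 1 GB function busy 24/7,
// billed per GB-second plus a per-request charge (rates and traffic assumed).
const gbSecondRate = 0.0000167;          // $/GB-second, assumed
const requestRate = 0.2 / 1_000_000;     // $/request, assumed
const secondsPerMonth = HOURS_PER_MONTH * 3600;
const requestsPerMonth = 10_000_000;     // assumed traffic
const lambdaMonthly =
  secondsPerMonth * 1 /* GB */ * gbSecondRate + requestsPerMonth * requestRate;

console.log(`EC2 (always on):    ~$${ec2Monthly.toFixed(2)}/month`);
console.log(`Lambda (busy 24/7): ~$${lambdaMonthly.toFixed(2)}/month`);
// With these placeholder numbers the always-on instance comes out cheaper,
// which is the point being made for sustained 24/7 workloads.
```

For bursty or infrequent workloads the same arithmetic flips in favor of serverless, because you stop paying the always-on instance cost.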
So what should you use? There is no right or wrong answer here. For most cases, I think Lambda would actually be cheaper, because you would rarely be running Lambdas 24/7; the cost benefit comes from the fact that they run very infrequently, or just for a few seconds per minute at most. It depends on your requirements. But I hope this video gives you clarity on the difference between EC2s and Lambdas, that is, serverful versus serverless architecture. And this is not limited to AWS or any specific cloud provider: every cloud provider these days offers some form of serverful compute and serverless compute, so you can extend this to Google Cloud Platform functions, for example, or any other provider.

That's pretty much it for this video; hopefully you learned something new and important. Make sure you leave a like and subscribe to the channel, and I'll see you in the next video really soon. If you're still watching, comment "I watched this video till the end" in the comment section. Also, if you're not part of codedamn's Discord community, you are missing out on the events we organize on a weekly basis. You already know the drill: like the video, subscribe to the channel if you haven't already, and thank you so much for watching.
Info
Channel: Mehul - Codedamn
Views: 39,819
Keywords: mehul mohan, codedamn, api, serverless, serverful, serverless and serverful models, serverless and serverful backend, what are serverless models, what are serverful models, difference between serverful and serverless models, backened, frontend, fullstack, web development, full stack web developer, learn full stack development easy, learn programming for beginners, technologies in 2022, techstack, programming languages, python, html, css, java, javascript, c++, c#, code, coder, coding, coding life
Id: 90pVRK49AQM
Length: 11min 49sec (709 seconds)
Published: Fri Feb 11 2022