Open Source Bootcamp - Complete Docker and DevOps Roadmap - Part 1

Captions
All right, the 100xdevs YouTube boot camp. Hi everyone, we're finally releasing a series of educational videos. The goal of this series is learning high-paying technologies that lead to a job in 2024 and that are not already out there on YouTube. I released a video a few months ago and it got a bunch of comments; the most upvoted comment was "the Docker and Kubernetes part was interesting, make a dedicated video for the same," and that is what this and the next video are going to be. From there, the playground is open: the comment section is open, let me know what you'd like to learn, and the most upvoted comment is what we will learn. A few things that come to my mind are some advanced React, some advanced MERN stack, more DevOps concepts, web3, AI (something I'm learning with great fascination these days), and low-latency trading systems. These six things cover a broad set of areas that lead to high-paying tech jobs. Within these, what specific technologies can be broken down into two-to-three-hour tutorial videos? Comment below, and the most upvoted comment is what we'll try to choose and learn through this series. The goal is at least one video, maybe two videos a week, of pure educational content. You will still see hype content on the channel here and there, but mostly the channel is going to pivot, at least for the next month, to educational content, because I have some time now and I'd rather spend it teaching than creating hype videos, of which I think we have enough.

Cool, this video is going to be the complete Docker boot camp, part one. Again, this is based on feedback, based on what the last video's most upvoted comment was. This video's comment section is open too; let me know what you'd like to learn, and that is what we'll pick after the next video. Let's get right into it.

The complete Docker boot camp, part one: why Docker, why are we learning this specific technology? There are a few reasons. Reason number one: it was the most upvoted comment on that video, "the Docker and Kubernetes part was interesting, make a dedicated tutorial on the same." That is what we're doing here, only Docker, not Kubernetes; if you want Kubernetes, the comment section is open. The other important reason is that Docker makes it very easy to set up open source projects locally. In fact, at this point I would urge you to go to two links: github.com/calcom/cal.com and this other link, both of which I will link in the description. If you go to both of these links, at the very top of the source code you will see a Dockerfile and a docker-compose file. Try to read through them, see if you're able to make sense of them, then go through this video and the next one (part two of the Docker tutorial), and then open both of those files again and see if you learned something. Throughout the series we are going to focus heavily on open source, because checkpointing your knowledge against open source code bases is very important; that's how you know you've learned something you can actually apply to the real world, rather than building toy projects on your own. The other big reason for learning Docker is that you can dockerize your own applications with it, and you can also learn to deploy the applications you have containerized on the internet. You can deploy on the internet, on AWS, without Docker as well, but almost no one does; most production-worthy technologies, applications, and companies deploy their applications via Docker.
That is another thing we'll try to learn through this specific series, video one and video two. Again, if you remember from that earlier video, the two important things I highlighted were: one, dockerizing your own full stack applications; two, setting up an open source code base locally. There are a bunch of other things we'll cover, but at a high level, the Docker part of that video covered these two things, and these are what we'll try to understand in this video and the next one.

More specifically, what are we going to learn in part one, which is this video? Number one: what is containerization, what does it mean, and why did it come into the picture. The history of Docker: why was Docker introduced, what does this company do, do they make profits or not, how do they make profits, and how did they catch fire in 2013-2014 such that today you can't really imagine an open source project, or much else, existing without Docker. How to install Docker and play with it locally. Containers versus images, the first buzzwords when you're trying to get into DevOps or understand Docker: what are containers, what are images, and what is the difference between the two terms; a very important interview question as well. Creating a simple full stack application and introducing the Dockerfile; if you don't know full stack, it's fine, we'll be creating a very simple backend, basically a JavaScript file, and I'll tell you what each and every line means. So if you don't know the MERN stack, or don't understand backend applications, that's fine: we'll create a very simple full stack application and understand how you create the Dockerfile for it, in other words how you containerize it. The same knowledge applies to more complex full stack applications, but for this video we'll keep it simple. Containerizing the backend of that application. Deploying that container, or rather that image, to this thing called Docker Hub, which is very similar to GitHub but for Docker and for images. Pulling an image and deploying it on the internet; here I'm introducing a few words, technologies, buzzwords: an image, deploying it on the internet, what does this mean? Basically, just as today you might push your code to GitHub and pull it from GitHub onto an EC2 machine (an AWS machine, a GCP machine), similarly you create a Docker image, push it not to GitHub but to Docker Hub, then pull it onto your AWS machine and see it work on the cloud. Lastly, there will be a few flaws in this approach, in the thing we'll be doing, and how you fix those flaws will be left as assignments. Most of my videos and tutorials come with assignments; these are assignments you should solve. They're on GitHub, open source: try to solve them, try to create a pull request. There are solutions already out there; don't look at them, and in the next video I'll cover the solutions in case you weren't able to solve them.

Cool, that's a high level of what we're covering today in part one. In part two we'll try to cover: what caching is, what layers are, how Docker caches layers and how that saves you a lot of time; what volumes and networks are; something called multi-stage builds; and a new file called the docker-compose file, what it does and why it is used. Why did I say in the beginning that there are two files in open source code bases you should look at: one, the Dockerfile, and two, the docker-compose file?
The Dockerfile is the thing we learn today; the docker-compose file is the thing we'll learn tomorrow. Adding a database to our backend, so we'll do a little more full stack, because that part is important for understanding docker-compose. Creating a docker-compose file for our backend, and then going through open source projects, specifically the ones I showed at the beginning of the video, and making sure we're able to understand everything there. That is what we'll cover tomorrow, in the next video, part two. In part one we'll cover the things I mentioned before this slide. Cool, let's get right into it.

This is where I would urge you to, not copy verbatim, but start taking notes, or just build a mental model of what I'm talking about. Why containerization, why dockerization, why was it introduced? Number one, let's look at the meme first. The meme says "it works on my machine," and then the senior engineer says "well, then we'll just ship your machine." This might be something you've faced if you currently work somewhere: something works on your machine, you push it to GitHub or wherever, and then some senior engineer says "hey, it doesn't work on my machine," and your reply, the wrong reply, is "well, it worked on my machine." The right reply should be "I'll check what's up," or something like that, but developers are very finicky: if something works on their machine and someone else says "oh, it's not working, there's a bug here," the developer will just reply "well, it worked on my machine." This is a problem, and it's one of the reasons containerization is so cool. As the meme says, that is how Docker was born.

What does this whole meme mean? Basically, what Docker lets you do, or more specifically not just Docker but what containerization means, is this: everyone has different operating systems. You might have Windows, I have a Mac, someone might have Linux, and the way to set up code bases on each is very different. The way I install Node.js on my Mac is very different from how you install Node.js on your Windows machine. There might be a bunch of other auxiliary dependencies if you're trying to set up a codebase locally, things like a database, things like Kafka, things like Redis, and you have to set up all these mini projects on your machine; the way to do it is different on a Mac and on a Linux machine. Every operating system is different, and the steps to run a project locally, by which I mean an open source project, any full stack project, a machine learning project, can vary based on the operating system. It is extremely hard to keep track of dependencies as the project grows. If your project did not have MongoDB and someone added MongoDB, all the developers on the team now have to install MongoDB on their machines, and every machine has a different way to install MongoDB. That's a mild inconvenience. What if there was a way to describe your project's configuration in a single file? What does configuration mean here? Configuration means: what database are we using, what dependencies do we need, what version of each dependency do we need; do we need Node.js 18 or Node.js 19, what version of Java do we need? What if we could write all of this in a single file? I've been talking a lot about this file: it's what's called the Dockerfile, where you describe all the things your specific container needs. And what if that could be run in an isolated environment? I am on a Mac, you might be on Windows; what if I could run my application in an isolated environment?
Which means: this is my Mac's file system, and there's a mini computer running inside it which cannot affect anything outside. It runs in a very isolated environment; if there's something buggy in there, it does not have access to my whole machine, and I can restrict how much RAM and CPU it can use. What if there was a way to run a project, an external project, an application, in this containerized fashion? This would make the setup of open source projects a breeze. Why? Because an open source project could have ten different dependencies. Say there's an open source project that has MongoDB as one of its dependencies, and MySQL as well. Normally you'd first install MongoDB (look up MongoDB's installation process, install it on your host machine), do the same for MySQL, then maybe run node index.js, or whatever starts the project. Now say you're done with the project: you delete it, but you still have MongoDB, you still have MySQL. That is bad. What if all of this was running in a single isolated environment, which is what's called a container? That way, you bring up the container and it brings up MongoDB, brings up MySQL, brings up the codebase, and starts everything; then you run a single command to destroy the container, which destroys MongoDB, MySQL, and your running Node.js code with it. That is why containerization makes setting up open source projects a breeze: you don't have to install dependencies independently; everything comes up with a single command. And where is all of this defined? In a single file called the Dockerfile.

Lastly, it makes installation of auxiliary services easy: databases, MongoDB, Kafka, whatever services your full stack application uses. For example, if you look at the codebase of cal.com, it uses Postgres as the database, Node.js as the runtime environment, and it might use a bunch of other dependencies I'm forgetting right now. Bringing all of this to your machine, running a single command that starts a container where the cal.com codebase is running, and bringing down the container when you're done with it: that is the benefit of containerization. The other benefit is that it runs the same on every operating system. Irrespective of whether you're on a Windows machine, a Mac, or a Linux machine, you all run a single command, the same command, and that command will create the container and run it for you. That is the high level of why you need containerization, and Docker is one company that came into the picture to solve it.

Before that, an official definition: containerization involves building self-sufficient software packages that perform consistently regardless of the machine they run on. It's basically taking a snapshot of the machine, of the file system, and letting you use and deploy it as a construct. Take a snapshot of a machine which has MongoDB, MySQL, the codebase, Kafka; put that snapshot out on the internet; anyone can pull it and run it on their machine. That is containerization.

What is Docker, though? Docker is a company that realized this is a huge problem. They envisioned that this would become the norm and that eventually everyone would start containerizing their applications, so they created a standard for it. It's a very opinionated standard, which means it is one particular way to create containers, and it became a buzzword: today most people create Docker containers. There are a bunch of other technologies that let you do containerization; Docker ended up becoming the most popular one.
It was introduced in 2014 and caught fire from 2015 onwards. Today almost every open source project I look at has a Dockerfile, and that makes developers' lives easier: most open source projects have Dockerfiles, which makes your life easy when you're trying to set up the project locally, and makes it easy to deploy a container, which is the other benefit of containerization. Points five and six are super important: one, it lets you deploy projects very easily on the internet (how, we'll see), and two, it allows for container orchestration, which makes deployment a breeze. What is container orchestration? If you remember my earlier video, Docker was only the first part; infrastructure as code was the second part; Kubernetes was the third part, and Kubernetes is what lets you do container orchestration. It basically means that once you've created this mini computer, this container, which has all your dependencies, you can put it on various machines, be it an AWS machine, a Google Cloud machine, or an Azure machine. Irrespective of your cloud provider, you run a single command to create the Docker image, and then you can orchestrate where to deploy and how to deploy using fancy technologies like Kubernetes. That's another big benefit you might not immediately see until you start deploying applications using Docker. Right now you might just be focused on setting up projects locally; eventually, when you either become a DevOps engineer or have to containerize and deploy your own application, Kubernetes becomes super important, and Docker is the technology that enables it. Of course there are a bunch of others; this is the most popular one.

Cool, at this point we know enough theory: install Docker on your local machine. The instructions are right here. I have already run them, but if I had to run them again, I would just Google "Docker installation," and there will be operating-system-specific steps. On macOS I think it's as simple as a brew install; select your operating system, for example if you're on Ubuntu select that and run a few commands. By the end of it, this is the command you should run, maybe without sudo: docker run hello-world. This runs a very small test container on your machine, which is how you know whether or not Docker is running locally. Right now, as you can see, I ran docker run hello-world and it says "cannot connect to the Docker daemon" at some specific socket address. The reason is that I don't have Docker running locally; I did install it, I just don't have it running, which is what I'm going to fix next. On Mac, I installed it using the Docker GUI, so I just have to open the GUI. Now if I run docker run hello-world, it says "unable to find image hello-world locally," pulls the image from the library, runs a few things, and finally, the important line, prints "Hello from Docker!" As long as you see "Hello from Docker!" after running docker run hello-world, you're good to go. If you see some other error, that means there's an installation problem; fix that first and only then proceed with the video, otherwise you won't be able to follow along, and then there's no point.
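If you want a purely terminal route on Linux, here is a minimal sketch, assuming Ubuntu and Docker's official convenience script (on Mac or Windows, install Docker Desktop through the GUI instead):

```sh
# One way to install Docker on Ubuntu: Docker's official convenience script
curl -fsSL https://get.docker.com | sh

# Verify the installation; you should see "Hello from Docker!"
docker run hello-world
```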
Otherwise you're stuck in tutorial hell, and that is something we don't want. So make sure you're able to run docker run hello-world; if you are, that means you have Docker installed locally. Even if it only works as sudo docker run hello-world, that's fine as well: it means Docker needs some extra permissions to run locally, which is fine for this tutorial. Eventually you shouldn't use sudo; you should give Docker the right permissions so it can run without it. sudo basically stands for "superuser do," which means Docker needs elevated permissions to run on your machine. If that's your situation, it's fine: for whatever commands I tell you to run from now on, just add a sudo in front. But preferably make sure that, one, docker exists as a command (the Docker CLI gives you a bunch of things), and more importantly, that docker run hello-world runs on your machine and prints "Hello from Docker!"

Cool, after the installation, let's understand what exactly is inside Docker. There are three important parts we're going to focus on. Number one, the CLI, which stands for command line interface: the thing you just saw when I ran the docker command in my terminal. The name itself says interface, which means this is one way for you to run Docker commands; you can run them through code as well, through a bunch of other clients, but the CLI is the popular way. What does a Docker command hit when it runs? That's the engine. And finally there's something called the registry, like Docker Hub or a bunch of other registries, where you can deploy your Docker containers, or more specifically your Docker images. Too much information; let's take a pause and focus on the first one, the CLI. This docker right here is what's called the command line interface (ignore the ps for now, just docker). If you look at my machine, when I run docker it gives me a bunch of subcommands I can run: docker run, docker exec, docker ps, docker build, and so on. If I run docker ps it gives me a bunch of output you don't have to worry about just yet; that is one of the many commands the Docker CLI lets you run.

Let's move to the second part, the engine. The engine is the main part of Docker, basically the Docker codebase if you look at it on GitHub: the thing that actually runs the mini containers, the mini machines, the thing that lets you create images. It's what's called the Docker daemon, or the Docker engine. The CLI is just one way for you to interact with the Docker daemon; you could also write some Go code that runs these Docker commands (docker ps, docker images, docker build), or you can use the CLI, but both of these ultimately talk to the Docker daemon. It's the long-running process that manages your images and lets you create images, containers, volumes, networks, and a bunch of other things. Probably the most important part of Docker, the meat of where the Docker codebase runs, is the Docker daemon, a.k.a. the Docker engine.
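To make the CLI-versus-daemon split concrete: each of these commands (all of them appear in this video) is just the CLI sending a request to the daemon and printing the response.

```sh
docker --help    # the CLI itself: lists the subcommands it supports
docker ps        # asks the daemon: which containers are running right now?
docker images    # asks the daemon: which images are stored locally?
```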
Lastly, the way Docker makes money is through something called Docker Hub. There are a bunch of other ways, but the most important one is Docker Hub. It's basically their alternative to GitHub. What does GitHub let you do? GitHub lets you put your source code in a centralized place on the web, from where people can pull it. Docker Hub is very similar, except that when you're deploying to Docker Hub you're not deploying code, you're deploying an image. What is an image? You'll see very soon, but there is a very big difference between GitHub and Docker Hub. If I go to my docker-roadmap repository on GitHub, it has the source code, which means a bunch of JavaScript files, TypeScript files, HTML files. If I go to Docker Hub and look at an image, say the MongoDB image, it does not have just the source code. Well, it does contain MongoDB's code, but it has a bunch of extra things: the file system needed, the networking setup, and more. We'll see very soon how these images are created, but what you need to know at this point is that Docker Hub is one of the ways Docker makes money, and what it lets a developer do is this: once a developer has created an image on their machine, they can deploy it to Docker Hub, and then EC2 machines, AWS machines, GCP machines can pull that image from Docker Hub and run it immediately. Very similar to GitHub. The few gotchas to keep in mind: Docker Hub isn't the only registry out there. AWS has its own registry, GCP has its own registry; Docker Hub is the most popular one because it's made by the creators of Docker, but there's a more general concept you should know, called a registry. A registry is a place where you can deploy an image; Docker Hub is one of many. What does a registry let you do? Here is the registry, and here is your machine: you create an image locally and then push that image to the registry. Everyone around the world can look at that image, and if they want to pull it locally and run it, they run a couple of Docker commands and it runs on their machine. Cool, those are the three important high-level pieces of what Docker provides you.

Now let's get to images versus containers, another big checkpoint. We understand what Docker is, what containerization is, and the three high-level things Docker provides: the CLI, the Docker engine, and Docker Hub. Next we'll understand two mild concepts, images versus containers, which is also a very popular interview question. After we understand this is when we'll write a bunch of code and start creating images locally, deploying them to Docker Hub, all that. Let's look at the official documentation for what an image is and what a container is: "A Docker image behaves like a template from which consistent containers can be created. If Docker was a traditional virtual machine, the image could be likened to the ISO file used to install your VM." I'll take a pause here. First important thing: a Docker image behaves like a template from which consistent containers can be created. What does consistent mean? It means that irrespective of whether you're running it on Windows, Ubuntu, or a Mac, all three machines will see the same outcome.
It's similar to running the docker run hello-world command here: what I see is what you might see, is what someone else on a different OS might see, because the image is consistent, and consistent containers can be created from it. "If Docker was a traditional VM, the image would be like the ISO file." Now, I don't know if you've ever tried to install Ubuntu on your machine; if you have, a friend might have given you an ISO file on a pen drive. That is one way to install virtual machines on top of your operating system, or more specifically on what's called the hypervisor, the thing below the operating system. But VMs differ from Docker; Docker is significantly different from virtual machines, both in concept and in implementation, yet the analogy is a good way to understand what images are and how they differ from containers. What is an image? The ISO file your friend gives you on a pen drive when you want to install Ubuntu. What is a container? A container is when you run that image locally and it's actually executing on your machine. That's the high-level difference between images and containers; we'll dive deeper very soon. So far: an image behaves like a template from which consistent containers can be created. "Images define the initial file system state of a new container." This means that if you look inside an image, it contains the file system, the codebase, the network information, and a bunch of other things. It's significantly different from just the codebase: the codebase is what you push to GitHub; all of the above is what an image contains. "They bundle your application's source code and its dependencies into a self-contained package that's ready to use within the container runtime. Within the image, file system content is represented as multiple independent layers." We'll go over that last line in part two. This is a high-level picture of images and containers; feel free to take a pause here, read through it, and let it marinate. In the next five to six slides we'll understand images and containers in more depth.
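As a small teaser for that last line about layers (we cover layers properly in part two), the CLI can already show you an image's layers; assuming you pulled hello-world during the installation step:

```sh
# Each row in the output is one layer of the image's file system
docker history hello-world
```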
Cool, images versus containers: let's take a more practical example. If this is your local machine, a Mac or an Ubuntu machine or a Windows machine, you might have an index.js file, or a python .py file, or an index.html file, whatever. If you've ever run a codebase on your machine before, the command you run is something like node index.js. You may or may not have run this; it's one way to run applications. But what are we learning today? Containerization and dockerization. So what is the way to create a container from this? Step one is creating an image. The docker build command lets you take this index.js file, describe what else you need along with it, and create an image from it. What does the image contain? Your index.js file; and beyond that, this is an image that should have Node.js in it, an image that should have a file system in it, an image that should maybe expose a certain port (if you don't understand that last bit, feel free to ignore it for now). This image is significantly different from just a file. A file is much easier to run locally; an image is slightly harder to create, because it's an independent entity which has not just your source code but also all the dependencies your source code needs to run. Why is this useful? If you remember from the start of the video, the whole point of containerization and dockerization is: can you create a self-contained container that you can send to a friend running Windows, and they'll be able to run the codebase; send it to a friend running Linux, and they'll be able to run the codebase. It's a self-contained system with not just your codebase but basically all the dependencies your codebase needs to run. A container, in contrast, is when you actually run an image locally; similar to when your friend gives you an ISO file on a pen drive and you run it and see Ubuntu booting, that's when you can call it a container. And the other high-level difference: you only create an image once, because there's no point creating the same image twice; it would be like having two identical index.js files. A container, though, you might run more than once. You might ask, why would I want to run the same index.js file twice? For auto scaling, maybe: there's a lot of load on your machine, so you want to run two different containers that your end users can hit, and that's when you run docker run twice, which gives you two containers. So what is the difference between images and containers? An image is a thing you create once: a 1 GB or 2 GB file, say, which has your file system, a mini operating system of sorts, networks, codebase, and a bunch of other things. When you run that image using the docker run command, that's a container: an image in execution is what's called a container. The image is what you push to Docker Hub; when I showed you the MongoDB image on Docker Hub earlier, the image is what the developer of the MongoDB image pushed there, similar to how you push your source code to GitHub. You don't push containers to Docker Hub, similar to how you don't run node index.js on GitHub: GitHub is meant to store your codebase, Docker Hub is meant to store your images. Once your image is there, you pull it to, say, an AWS instance, or you give it to a friend: you point your friend to the specific MongoDB image and tell him, bro, you can run this locally; it contains my source code and everything else you need to run it. That's the benefit of Docker Hub, and the image is what gets stored there.
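Earlier I said you create an image once but might run it many times; here is a minimal sketch of that idea, reusing the hello-world image from the installation step:

```sh
# one image...
docker pull hello-world

# ...many containers: each `docker run` starts a fresh container
docker run hello-world
docker run hello-world

# both container instances show up here (exited, in this case)
docker ps -a
```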
When your friend pulls it locally and runs it, say they pull that specific image and then run docker run with the image ID, that's when they'll see containers: if they run it twice they'll see two, if they run it thrice they'll see three. Let me run one of these so we have a practical example. This is the MongoDB Docker image. I can run docker pull, which will pull the latest MongoDB image, and once it does (it'll take some time, not too much), I can simply run docker run. I should give it a bunch of other arguments, but just running these two commands, docker pull followed by docker run, will: one, pull the image to my machine; two, run the image. What you see here is the container: these logs come from a container, and this is the image that was just pulled from Docker Hub.
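The two commands from that demo, with the image name as it appears on Docker Hub:

```sh
docker pull mongo   # pull the latest MongoDB image from Docker Hub
docker run mongo    # start a container from it; MongoDB's logs stream here
```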
Good checkpoint. At this point I would urge you to read through this again: the difference between images and containers. High level: an image is what you create once from your codebase, and it contains your codebase, your file system, your networking information, and a bunch of other things, a pretty big file, 1 GB say. A container is when you run that image locally, when you actually see the logs, when the service is actually running; if it's a website, you can actually hit it and get content back. That's a container, and that's how it differs from an image.

Cool, let's proceed to coding; we've talked about too many abstract things. At this point I would urge you to go to the docker-roadmap repository on GitHub. It has a bunch of files; just download it first, or clone it: click "Download ZIP" here, which brings the codebase onto your machine. If you want to go down the other route, you can use git commands to get it locally; I'm not going to cover that, it's very straightforward if you've interacted with GitHub before. Once you have it locally, I'd urge you to open just 1-simple-app. I'm going to go to part-1/1-simple-app locally, and I'd urge you to clone the repo and get there as well. Here I'm going to take you through the structure of this project and what these files do, before we proceed to creating the Dockerfile. Ignoring the Dockerfile here, you'll see four or five files: a README, which you can skip; the index.html file, a basic HTML page that says "hello world" in the title and "hello from a website, and I want to be exposed via container" in the body. It's a very basic website; if I open it in Chrome it looks something like this. The important file is index.js, which is a very basic Express HTTP server. If you don't know what Express is, I'll give a very high-level briefing; if you already know it, feel free to skip this part. Express is one of many HTTP server frameworks. What is an HTTP server? It's basically a way for people to expose information on the internet. That's very abstract, so let me take a moment to explain. There are a bunch of machines out in the world; when you go to https://google.com, what happens under the hood is that a request goes out to a server somewhere, say in the US, a Google server, and that server returns some HTML to you, and that is how you see HTML content on your local machine. If I open google.com here, you can actually confirm this: right click, click Inspect, open the Network tab, and it will show you the HTTP requests going out. The important one is the first google.com request (if you don't understand this part, feel free to ignore it). At a high level, whenever you go to any website, a request travels over the wires to a machine in the US, in the UK, in India, wherever, and some response comes back; the very first request gives you back some HTML, and if you look at the response, you'll see a bunch of HTML. This is what we're trying to emulate locally: an HTTP server that can return some HTML. Now you might ask, why do we need this when I can simply open the HTML file directly? The answer is that when you open the HTML file directly, the URL bar shows a file path, and when you expose things on the internet, a website's URL never starts with file:///Users/... ; it starts with http:// or https://. That's how we want to deploy our application, and to do that we have to create an HTTP server. There are many ways to create HTTP servers; Express is the popular framework in JavaScript/Node.js for it, and we've created a very simple Express server here. The important part is just this bit: it says, whenever a request comes on /, send back the index.html file. There's some other boilerplate code here; if you really want to know what it does, the Harkirat Singh full stack roadmap, this video right here, explains it very well, so feel free to watch that; I'm not going to explain it here. The important thing is that to start this HTTP server, which means running node index.js, you need Node running locally on your machine. This is the reason we're introducing Docker: if you want to run this locally without Docker, this is the command you need, and for that you need Node installed locally. The steps are fairly straightforward if you want to try it, but you don't have to, because eventually we'll use Docker to run it; all you need on your machine is Docker, everything else is automated. If I run node index.js, it says "cannot find module 'express'," because the first command I need to run is npm install. When I run npm install and then node index.js, it says the app is running on port 3000. Again, if you want to understand all of this in depth, go through the full stack roadmap. Now if I go to localhost:3000, you can see the URL starts with http://, which looks very similar to the Google example, with some differences: here it says the connection is not secure, there it says it is secure; you don't have to worry about that just yet. The point is, we have created an HTTP server. The problem is that the HTTP server requires you, or anyone else if this were an open source project, to have Node installed locally, which is bad.
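For reference, the index.js described above looks roughly like this; treat it as a sketch, since the actual file in the repo may differ slightly:

```js
// A minimal Express server of the kind described above
const express = require("express");
const path = require("path");

const app = express();

// whenever a request comes on "/", send back the index.html file
app.get("/", (req, res) => {
  res.sendFile(path.join(__dirname, "index.html"));
});

app.listen(3000, () => {
  console.log("Example app running on port 3000");
});
```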
That is why we need to containerize this application: if we containerize it, if we create a Dockerfile for it, other people don't need Node on their machines; they can simply use Docker to run the whole thing, this whole Hello World website, locally. The way to do that is through what's called a Dockerfile. Before that, let me take you through the other files: index.html we saw, index.js we saw; package.json is the only other important file (you can ignore package-lock.json). package.json is the place where you write a bunch of metadata for your Node.js application; the most important part here is probably the dependencies section, which says Express is a dependency. What is Express? As I've said, an HTTP server framework; if you want to dig deeper, I cover all of this in the full stack roadmap that's already on YouTube.

All right, checkpoint. We coded a little; I took you through the codebase of what we'll be containerizing today: a simple Express application that exposes some HTML on the / route on a specific port, more specifically port 3000. What we need to do from here is containerize this application: create the Docker image for it, run the container locally, and visit the website in a containerized fashion, not running on my machine directly like just now, but running inside a container. How do you do that? This is our application, and the way to containerize it is by creating what's called a Dockerfile.

All right, the Dockerfile: maybe the most important slide of this video. This is where you spend a lot of your time if you're a DevOps engineer. This is where you describe, in order to run your full stack application: what dependencies you need, what external libraries you need, what ports you need to expose, what final command needs to run to convert your image into an actual running container, and where your JavaScript files can be found. All of this is described in a single, often very long, file called the Dockerfile. And where does it live? Right next to your source code: this is the source code for my application, the index.js file, the package.json file, and right next to them is the Dockerfile, which contains all of this configuration for the project. This is what the configuration looks like; let's go over it from top to bottom. The very first line says FROM node:20. The first line of pretty much any Dockerfile is going to be FROM followed by something, and the thing on the right is what's called the base image. The base image answers: you're creating a big image; what should you start from? Often this is something like Ubuntu, a very basic Ubuntu operating system image which has everything Ubuntu does; the same thing you'd get installing Ubuntu from a friend's pen drive, you get by writing FROM ubuntu (plus a version) at the very top. But since we know this specific application is a Node.js application, why not use a Node base image that some very smart people have already created? If I go to Docker Hub and search for the Node.js image, I get the official node image, which is basically an operating system plus the extra pieces (node, npm, nvm, all the right libraries) needed to run a Node.js application. Similar images exist for Go, for example; there's a Golang base image.
If you don't want to do any of this, you can always say, bro, I'm just going to use the Ubuntu base image; or there's a "scratch" base image, which is literally nothing, from scratch; or there are more lightweight operating system variants, a famous one being Alpine, and you can write that instead. Right below the FROM step is where you'd write the installation steps for Node yourself: if normally you install node and npm by first installing nvm (the Node version manager) and then running nvm install node, you can write all those steps in the Dockerfile: bring in nvm, do an nvm install node, with the base image being Ubuntu or Alpine. But if you don't want to be that fancy, and you have a Node.js application, you'll generally just use the node base image with the specific version of Node you need. It has been put together by really smart people who maintain the official node image, and it has the file system and all the dependencies needed for Node to run on that file system.

The next step is what's called WORKDIR, the working directory. This represents where exactly you want to work when you run the commands that come after it: the base directory inside the image where you want to pull your code, run your commands, all that jazz. If you look at my file system right now, I'm in the folder /Users/kirat/…/devops-series/part-1/1-simple-app. Similarly, since you're creating a mini computer, you tell that mini computer: this is my working directory; this is where I want to pull my codebase and run it from. So when you set a working directory and then run something like COPY . . , which means copy everything from this folder into the Docker image, the working directory is where that code will go. These two lines, combined, do two things: they change the current folder inside the Docker image to this base folder, and they copy over everything from the local folder, which means the Dockerfile, the index.html file, the package.json, the index.js.
Where inside your image does all of this go? If this is your final image being created, the /usr/src/app folder is where we copy over all the source code. Later we'll be able to shell into one of the containers running this image and actually see that the /usr/src/app folder contains all of this code; for now, the high-level thing to know is that this COPY command copies your whole codebase into the working directory. Why are we doing this? Because we're creating the image, and this file represents the configuration of your final Docker image, so you need to put all your code there, because your code is what finally runs.

Cool, moving on; the next command is RUN npm install. Now, this one is slightly debatable. For the people who don't know, npm install creates a folder called node_modules containing all your external dependencies; external dependencies are anything mentioned in package.json's dependencies section, which in our case is Express. You bring those from a public registry onto your machine, and that single command creates node_modules. Some people argue you should run npm install locally first, and only then create your image: that way, when COPY runs, node_modules moves into the working directory automatically, so you don't need this RUN step. But ideally you want to create a fresh image, which means you don't want to copy node_modules into your image; you want to copy over just the source code written by hand, and run npm install inside the image to create node_modules there. Why? There are a few reasons, the biggest being this: when node_modules is installed on my machine, it might bring in, say, a few C or C++ files that then get built for my Mac, and the way they're built on my Mac might not be the way they need to be built on a Linux machine, which is what we're actually working with if we're using the node:20 base image. Whenever npm install happens, OS-specific things happen, and since we know the image runs Linux, or Ubuntu, or whatever is written in the Dockerfile, why not run npm install directly inside the image, in that exact environment, rather than running it on my Mac and copying node_modules over? So, interview question: should you copy node_modules from your file system into the Docker image? The answer is no; you should copy over just the source code and package.json, and run npm install inside the image. One way to ensure you never copy node_modules into your Docker images is to create a .dockerignore file, which I already have, and put node_modules inside it. Whatever is listed in the .dockerignore file won't be copied over when the COPY command runs.
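A minimal sketch of that .dockerignore (the Dockerfile entry is the optional addition discussed in a moment):

```
# .dockerignore: anything listed here is skipped by COPY
node_modules
Dockerfile
```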
Again, what does the dot mean here? Dot means everything. If I wrote something like COPY ./index.js . it would mean: copy just the index.js file. I could similarly write COPY ./index.js ./package.json . to copy index.js and package.json over to the image. The thing on the right represents where, inside the working directory, you want to put the files; the thing on the left represents which files you want to copy there. In our case we're just going to do COPY . . , which means: into this working directory, put all of the code present here. That includes everything, including the Dockerfile, which is something you don't really need inside the image, but it's fine; you can also put the Dockerfile in .dockerignore, which should be fine as well. Let me just do that real quick.

Cool, that's step four. Step five is exposing a specific port, which is something new. As you saw when I started the index.js file, an HTTP server was created on port 3000, and whenever I went to localhost:3000 in my browser, I saw the website. The thing is, containers are not always HTTP servers: a container could just run some very heavy algorithm and then die. Containers don't necessarily have to be long-running HTTP servers, which means they don't necessarily have to expose anything. And when containers were designed, nobody wanted to give them too much power: if a container says "I'm running something on port 3000," the host machine shouldn't automatically send every request arriving on port 3000 to that container. Instead, the container exposes that port, and the machine decides whether or not to actually send requests there. The container's job, at minimum, is to say: this is a port I am exposing, so if there's an incoming request coming my way, please send it to me; the host machine running the container, my Mac here, can then decide whether or not to forward requests. That's what we're doing right here: the container exposes port 3000. If index.js used a different port, that is what we would expose in the Dockerfile.

Recapping from top to bottom: base image, working directory, copy over the source code, run npm install to add node_modules, and expose port 3000. The last command is CMD, which represents what command to run when the container starts. Big, big difference between all of the above and this single command: all of the above run when you're creating the image, that 1 GB file; this one runs when you're running the image, that is, when you're starting the container. It does not run when you're creating the image, only when you actually start it and create the container. That, again, is an interview question; the question is usually: what's the difference between CMD and RUN? The answer: you can have multiple RUNs, and a RUN means "install things, bring things onto the machine" (if you were installing, say, Golang as a dependency, you'd use RUN to bring Golang onto the machine). CMD is what runs when the image gets converted into a container, that is, when you actually start the container. Cool, this is the general structure of most Dockerfiles you'll see. Real ones are much bigger, and there are a bunch of other things we'll discuss in part two, but at a high level this is a basic one for Node.js; a Golang one would look very similar, maybe with an extra compilation step. All righty, so hopefully we now understand what a Dockerfile looks like, at least the Dockerfile for our basic full stack application.
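Putting the whole walkthrough together, the Dockerfile described above looks roughly like this; it's a sketch, with the working-directory path following the one mentioned in the video:

```dockerfile
# Base image: an OS plus node/npm preinstalled by the node image maintainers
FROM node:20

# The base directory inside the image where the commands below run
WORKDIR /usr/src/app

# Copy the source code in (entries listed in .dockerignore are skipped)
COPY . .

# Runs at *build* time: creates node_modules inside the image
RUN npm install

# Declare the port the container offers to the host
EXPOSE 3000

# Runs at *container start* time, not at build time
CMD ["node", "index.js"]
```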
The next question is: how can you create an image from this Dockerfile, and how can you run it? Fairly simple commands. This is how you build an image, so let's run that command: docker build . (the folder containing the Dockerfile and the source code) followed by -t and a tag name. A tag is like a name, not exactly an ID, that you give to the specific image; let me tag it test_app. That's it: this single command will create your image, which means it will bring the node:20 base image to my machine, copy over my source code, run npm install, expose the 3000 port, and produce a very nice image for me locally. As you can see it takes some time, because step one of four, pulling from docker.io/library/node:20, takes a while; rightfully so, it's a big image coming from Docker Hub to my machine. If you're wondering what the "1/4" here means: these are the four high-level steps that create this image. The EXPOSE line isn't really a build step, and the CMD line, as I said, does not run when you're creating the image, only when you're running it. It took almost a minute, mostly fetching the image from Docker Hub: 2/4 WORKDIR, 3/4 copy over the source code, 4/4 run npm install, and by the end our image has been created. Let's run docker images: as you can see, among the many images I already have, there's now the test_app image, created 2 minutes ago, 1.1 GB in size. This is the image we just created. The next popular command, which you might have been waiting for, is: how can I convert this image into a container, that is, actually run it? The tag of the image was test_app, so the way to run it is docker run test_app, literally. This command will start that image, meaning it will start a container from it, and it does say "example app running on port 3000." So if I go to my browser, to localhost:3000, will I see anything? I should have, but I don't. Why? A small caveat to discuss whenever you're running images. As I said, the Mac machine doesn't want any random container to start and then have every request arriving at localhost:3000 (that is, every request reaching the Mac on port 3000) automatically routed to it. One container might say "bro, I'm exposing something on 3000," and another container might say the same. It's up to the Mac: where does it want a request arriving on port 3000 to be routed, container one or container two? How do you express that? With an extra argument, so let me stop this. The argument is -p, the port mapping. I've now run docker run test_app with -p 3000:3000, which means the 3000 port of the Mac machine should forward all requests to the 3000 port of the container this command starts. Now if I run this and go to localhost:3000, I see "Hello from a website." What's the new thing I added? Simply another argument, -p, saying that port 3000 on my machine, the Mac, should point to port 3000 on the container.
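The build and run commands from this section, with the tag used in the video:

```sh
# Build an image from the Dockerfile in the current directory, tagged test_app
docker build -t test_app .

docker images                      # the new test_app image shows up here

# -p <host port>:<container port> forwards host traffic into the container
docker run -p 3000:3000 test_app   # http://localhost:3000 now responds
```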
And if you change the host side to, say, 3001 or 3003, then going to localhost:3000 won't work, but going to localhost:3003 will, because I've said: this is my host machine port, this is my container port; any request arriving on the host machine on 3003, route it to the container on 3000. That's how you start an image. How do I know the image has started and a container is running? If I run docker ps in a different terminal, it shows me the container ID, the image it's running (test_app), the command that started it, when it was created, how long it's been up, all that. If I stop it, docker ps shows nothing. That's the brief of the difference between docker build, which lets you create an image, and docker run, which lets you create a container; do we now understand the difference between images and containers? docker images still lists a bunch of my images; this list usually keeps getting cluttered, because you keep creating images as time goes by, pushing them to Docker Hub, pulling them from Docker Hub, but you don't usually have many of them running at once. The project you're working on locally is what you have running locally; when you move from that project to a different one, you start a different container. So you'll usually have a bunch of images on your machine, each 1 GB, 2 GB, 700 MB, but only when you actually want an app to run do you run docker run. The only thing is that there are a bunch of arguments you can pass; you've seen just one so far, -p, which as you saw does the port mapping. We'll see more later on.

Cool, let's move on and see how you can push your image to Docker Hub. This is the part you might have been waiting for, the fancy bit: we now have an image locally, but I want to push it to Docker Hub, one, to put it out there for everyone to see, and two, to let my friends try my codebase out. The way to do that is fairly straightforward; let's first look at the high-level flow. After you've created the image locally, you sign up on Docker Hub, and very similar to the images we saw there, you take your image (not the container, the image) and push it to Docker Hub. Once you've pushed your image, your friends can pull it to their machines, and you can pull it to an AWS instance or a GCP instance. As long as the AWS instance is running Docker, or say you have a GCP instance that's also running Docker, you can run a single command there (which we'll see very soon) that pulls the image from Docker Hub and runs it on that instance. Again, this is true if you push to a public repository; you can also create private repositories, which is a whole different thing I'm happy to discuss later. You don't really need them for a long time, although that is how things are actually deployed in the real world: you have private registries where you push your images; and it's a very simple switch to move from a public to a private repository. Cool, that's the high level of how you push images, pull them onto an AWS machine, and run them. Let's actually, practically do it. Step one is creating an account on Docker Hub; I already have one.
Step one is going to be creating an account on Docker Hub. I already have one, but I would urge you to go to hub.docker.com and sign up or sign in. Once you do, you'll see an interface like this. It's very similar to GitHub's interface, where you can create a repository. I'm going to create a repository, let's call it test_during_video, and it's created as an empty repository, similar to an empty repository on GitHub. The important thing you need to do now is push code here, and the way to do that is to tag your image with the name of the repository when you build it, which in my case is 100xdevs/test_during_video. test_during_video was the name of my repository, but on Docker Hub you have to prefix it: your repository isn't just the name you gave it, it's your username slash the name you gave it. That's what I put here, and if I press enter, I basically rebuild the same thing. You'll see this happens much faster than the first time, because we already have the Node.js image locally, so it's not re-pulling it; we cover a bunch of these things in the advanced section when we cover layers, but as you saw, the first time took 60 seconds and this time took much less. Now if I check my images, I have a bunch of them, but at the very top I have 100xdevs/test_during_video, and I can simply do docker push 100xdevs/test_during_video and it will push my image from my local machine to Docker Hub. Let's wait for some time... there we go. Now if I go here and refresh, it says there's one version, with the latest tag, created a few seconds ago. I can go to the public view, and this is something I can share with my friends: "Hey, I pushed my image here, feel free to pull it and run it locally." All they have to do is run a single command, docker pull followed by docker run, and they'll be able to run everything on their machine without having Node.js installed. They just need Docker: with a docker pull followed by a docker run, they can run all of my code base on their machine without installing anything else. It could be a fresh machine that has nothing but Docker on it. Let's try that on an AWS machine of mine. I'm going to copy this and then SSH into an AWS machine; if you don't understand this part, it's basically me logging into my AWS machine. As you can see, it's a pretty empty AWS machine, it just has a bot running for the cohort, but it does have Docker running: if I run sudo docker run hello-world, it says "Hello from Docker!", so Docker is already installed. All I have to do is, one, sudo docker pull this. Why sudo? Because the way I installed Docker on this specific machine was not great, the permissions aren't right, which is why I need to run it with super-user permissions. So, sudo docker pull plus the name of the repository; by default it will get it from Docker Hub. Again, as I've mentioned before, you could push it to an AWS registry or a GCP registry; it's just that it's currently on Docker Hub, which is where we pull it from. After you've pulled it, all you have to do is run sudo docker run plus the name of the repository, which in our case is 100xdevs/test_during_video. If I run this, it says "example app running on port 3000". The question is: is it now deployed on the internet? And the answer is yes. Let's go to the URL of this EC2 instance, colon 3000... and I don't see anything. Can you tell me why? Take a break and think about it: why is nothing showing up here, even though I've run the docker command?
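To help you think about it, here is roughly the sequence that was just run, as a sketch; 100xdevs/test_during_video is my best reading of the repository name from the video, and sudo is only there because of how Docker was installed on that particular machine:

    # on my laptop
    docker build . -t 100xdevs/test_during_video    # tag = username/repository
    docker push 100xdevs/test_during_video          # upload the image to Docker Hub

    # on the EC2 instance
    sudo docker run hello-world                     # sanity check that Docker works
    sudo docker pull 100xdevs/test_during_video     # fetch the image from Docker Hub
    sudo docker run 100xdevs/test_during_video      # start it, exactly as run in the video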
You guys are smart enough to figure it out. I'm going to stop this Docker container and rerun it with, you guessed it, the -p mapping. Now, if I press enter with the right image name and go here, I see the website. So I can deploy it on the internet, your friend can deploy it on their machine, you can send this Docker Hub link to anyone and it just works. You might be saying this is not secure, and I do agree; there are ways to make your website secure, the way, for example, Google or Twitter is. I go to Twitter, it says secure; I go to this hello-world website, it's not secure. That's a whole different story for a different day. For now, you are able to create an image, push it to Docker Hub, and pull and run it on an EC2 machine, so basically everything mentioned here is done. Let's proceed. These are the five commands that you should understand by now: building an image, running a container... docker login is one that I missed. You can't just push to my repository; I randomly ran docker push here and it just worked, and the reason is that before the video I did a docker login. That is something you need to do after you have signed up to Docker Hub. When you do a docker login, it asks for your username and password; give those to it, and once you do, the Docker CLI basically has access to your Docker Hub account, so it can push images there. As you can see, it warns that logging in with your password grants complete access, and that for better security you should log in with limited privileges. You can create these things called access tokens somewhere in Docker Hub, and you can restrict a specific access token to a specific repository, so you give the Docker CLI only that specific access. I've just given it complete access, and feel free to do that when you're just testing things out. When you're actually working in production, each repository has an access token associated with it, and deployments happen via GitHub Actions, where a given GitHub Action only has access to its own access token. But feel free to ignore all of this for now. Going back to the slides: docker build, docker run, docker login, docker push and docker pull, these are the five commands that you should understand, hopefully by now. Let's inspect one more thing really quickly: the build times. How much time did it take for the image to build when I ran docker build? If you remember, it took more than 60 seconds, mostly on 1/4, which was pulling the image from Docker Hub onto my machine, with a bunch of other steps after it. But when I ran docker build again, as you saw, it did not take as much time, only 5.7 seconds or 3 seconds, because it already had the node:20 image locally. So Docker sort of understands and remembers it somehow; how, we will see in the next video. One thing to know is that Docker is pretty smart about what's called caching, and it uses something called layers to cache, again something we'll discuss in the next video. I do have to leave you with an assignment, though, which is why I wanted to discuss these build times: they go down as you do recurrent builds. The first build takes a really long time, the second one doesn't take as long. And as I said, there are four things being built at a high level: one, pulling the Docker image, that's the 1/4 here; two, the working directory; three, copying over the codebase; and four, running npm install.
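For reference, the Dockerfile being discussed looks roughly like this, reconstructed from the four steps just listed; the exact CMD line is an assumption on my part, since the video only tells us the app starts from index.js:

    FROM node:20               # layer 1: pull the base image
    WORKDIR /usr/src/app       # layer 2: set the working directory
    COPY . .                   # layer 3: copy the whole codebase, including index.js
    RUN npm install            # layer 4: install dependencies
    EXPOSE 3000                # metadata, not a real build step
    CMD ["node", "index.js"]   # runs when the container starts, not at build time (entry point assumed)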
There is one small problem in the way we've created this Dockerfile, and I would make one small change. If this were a production Dockerfile, there's one small change that needs to be made here; I'm not going to share it, because figuring it out is your assignment, and it's something that will make Docker builds faster. What that change is, I'm leaving to you, but I'll give you a hint as to what I'm expecting. The problem with this Dockerfile is the following. index.js is my source code, and if I ever change index.js, the COPY . . step changes, because it now copies over something new. Say I run docker build once with a certain index.js, and all these commands run from top to bottom; then I change index.js and run docker build again. When Docker builds the second time, it sees: "oh, node:20, same as before, I have it cached, I'll proceed very quickly"; "WORKDIR /usr/src/app, I have this cached, I'll proceed very quickly". Only when it reaches the COPY step does it go: "oh, sorry, index.js was one thing before and it's something different now", which means from this step onwards things have changed. So if you look at the steps here, given I've only changed one file in my source code: 1/4 will be the same, the Node.js image is still the same; 2/4 will be the same; but 3/4 may not be the same, because I'm copying over a new JavaScript file, so it will not be cached (you can see it says "cached" on the unchanged steps); and then npm install will not be cached either. The reason is that these steps are what are called layers in Docker, which is something we'll discuss in the next video, but at a high level each of these is layer one, layer two, layer three, layer four. This is how Docker builds, one layer on top of another, and once a layer is created, Docker builds on top of it. For example, if I run docker build once, I get layer one, layer two, layer three, layer four, and those are what you see as 1/4, 2/4, 3/4, 4/4. If I run docker build again without changing anything, Docker says "I already know this, I've done this once" and just uses the cached layers; that's exactly what you see when it says cached, cached, cached. When I ran docker build a second time, it said: bro, layer one is cached, layer two is cached, layer three is cached, layer four is cached. But if I change the index.js file between two builds, layer three comes out different, so it's no longer cached, and the problem is that if layer three is not cached, nothing beyond it is cached either, because the file system has changed; Docker can't cache the fourth layer when the third layer is uncached, so everything from there on is rebuilt. This is a problem because npm install is a very expensive step, it takes a really long time, and you would ideally want npm install to stay cached when the only file that has changed is index.js. If between two docker builds index.js is the only file that changed, meaning your external dependencies did not change, i.e. package.json, which contains your dependencies, did not change, then you shouldn't really have to run npm install again. So there should be some way to move npm install up somehow, so that that step remains cached if the only file the end user changed is index.js.
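To make that concrete, this is roughly what the build output looks like on a second build after only index.js changed; these lines are illustrative, not copied from the video:

    => CACHED [1/4] FROM docker.io/library/node:20    # base image unchanged, reused from cache
    => CACHED [2/4] WORKDIR /usr/src/app              # unchanged, reused from cache
    => [3/4] COPY . .                                 # index.js changed, so this layer rebuilds
    => [4/4] RUN npm install                          # every layer after an uncached one rebuilds too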
The reason I'm leaning so heavily on this is that in a full-stack application, package.json doesn't change very often. This specific file you see here, you don't really add new dependencies to it very frequently, which basically means this npm install step should stay cached: as you change index.js in your full-stack application, you don't change package.json too often, so there's no need for this step to be uncached, or for it to break the image caching. What if there were a way to move this up the chain somehow? That is what you need to figure out; that's all the hint I will give, and we'll discuss this in more detail in the next video. At a high level, that's assignment number one. Okay, so every step in a Dockerfile is a layer. What does layer mean? It basically means this is how Docker creates an image: when you tell Docker to create an image, it first creates layer one, which is the first step, then layer two, the second step, then the third step, then the fourth; the number of steps equals the number of layers. If you ever change the thing at the very top, say I move from node 20 to node 19, what happens? It means layer one changed, which means every layer after it changed too. If I go back a slide: it means everything is now uncached. Bro, you changed node from 20 to 19; this layer has to re-pull node:19, which will take almost two minutes, this next layer will also be non-cached, and nothing is cached after that. After a layer loses its caching, every layer that comes after it also loses its caching and is rebuilt from scratch. That makes sense if the base image is the thing you're changing, but if the thing you're changing is the source code, then layer one and layer two can still be cached; only layer three and layer four need to be uncached and rebuilt. But as I mentioned, there is a way to optimize this a little bit, by somehow moving npm install up the chain and a few files down the chain, which is something you need to figure out. That is the next assignment.
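To round out the layer picture, here's a sketch with the same assumed Dockerfile: change only the base image line and nothing survives the cache, because layer one is the foundation everything else is built on:

    FROM node:19               # changed from node:20, so layer 1 rebuilds...
    WORKDIR /usr/src/app       # ...and so does this layer...
    COPY . .                   # ...and this one...
    RUN npm install            # ...and this one: nothing after layer 1 is cached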
So what is assignment number one? Figure out a way to make Docker builds faster, knowing that you're in a full-stack application where, as I've told you, package.json does not change very often. package.json is what governs how this npm install command runs: if package.json changes, the output of npm install changes, and if package.json is the same across two builds, npm install can be the same. Which means if somehow I could push this step up the chain, and if the only thing changing through a full-stack application's life cycle is the index.js file, we could save some build time, because we could somehow keep this step cached. This is the step you need to figure out how to cache; that's all the hint I'll give. The second assignment: this image is pretty heavy, as you saw, almost 1 GB. Figure out a way to find a more lightweight image and run the Node.js application with it; there is a way to run this specific Node.js application in just about 110 MB, so figure out how you can do that and what exactly you have to change to be able to do it. Cool, those are the two assignments. Solutions are on GitHub already, but I'd urge you not to look at them right now. Quick recap of what we've learned: what containerization is, the history of Docker, how to install Docker on your local machine and play with it, what containers are, what images are and the difference between them, creating a simple full-stack application and writing the Dockerfile for it, containerizing the backend, deploying the backend to Docker Hub, pulling the Docker Hub image onto your AWS machine and starting it, a small flaw in our approach, which is assignment number one, and then assignment number two as a bonus; plus caching and layers in Docker, which we discussed a little bit. What layers really are, we will discuss more in the next video. The next video will start with what layers are and what caching is, then volumes and networks, multi-stage builds, understanding what Docker Compose is, adding a database to our backend (so we'll write some more backend code), creating a Docker Compose file for a project, and going through some open source projects to make sure we understand everything there. Cool, let me know how this first part of the, not full stack, 100xdevs YouTube boot camp was, and as I said, it's this video, then the next video, and then the coast is clear: let me know what you guys want to learn next and we're happy to do it. With that, let's end it. I'll see you guys in the next one. Bye-bye!
Info
Channel: Harkirat Singh
Views: 136,363
Id: fSmLiOMp2qI
Length: 77min 4sec (4624 seconds)
Published: Fri Oct 13 2023