Golang Microservices: Using Docker for Containerization

Video Statistics and Information

Captions
Hello, my name is Mario, welcome to another video. In today's episode I will share with you another tip for building microservices in Go: specifically, containerization using Docker, how to build your image, how to use Docker Compose to run that image for local development, and how to push that image to Docker Hub.

So what is Docker? Docker is a tool designed to make it easier to create, deploy, and run applications using containers. It's not only one tool, it's a collection of different tools that allow you to create a package, an artifact, that can contain the things you're supposed to be deploying. Think of what we're doing right now: we're building a web service that happens to be a binary that may include some artifacts. Well, in this case it doesn't include them, because they're already embedded using the Go 1.16 embed feature, but think that it may require some certificate, for example, things like that. So what Docker does is wrap all of those dependencies into one single artifact that can be shared between different services.

At the moment, the most popular registries, the places you can push your images to (images being what Docker calls these artifacts), are Docker Hub, Amazon Elastic Container Registry (ECR), and Google Cloud Container Registry. You can literally run your own on-premise if you want to, but the idea is that these are third-party providers that host your images, so you can refer to them when you're running your continuous integration and continuous delivery pipelines, or when you're running your actual services that happen to depend on Docker images.

Now, the cool thing about Docker is that because an image is a self-contained artifact, you can use a tool called Kubernetes, which gives you orchestration and an easier way to configure all the different things you can do with the services you define using those images. At the moment, the most popular managed options, not only for Kubernetes but in general for running images and containers with load balancing and autoscaling, are GKE (Google Kubernetes Engine), ECS (Amazon Elastic Container Service), and EKS (Amazon Elastic Kubernetes Service). And obviously, because Kubernetes is an open source project, you can also run it on-premise if you want to.

Now I will give you a quick introduction to Docker and the way to use it. First of all, you need to install it; if you haven't used Docker before, I will put a link in the description. It's not that difficult as soon as you get an idea of what it's supposed to be doing: just think of a container as something that wraps everything you need for running your application, artifacts, dependencies, binaries, assets, all those kinds of things. Above that, you will be adding configuration that applies to that artifact: how much CPU it's using, how much memory, what network configuration it's supposed to be using, the load balancing, how many instances are supposed to be running, and what security you're going to apply to those containers.
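As a rough sketch of that layering, this is what pulling an image and constraining its CPU, memory, and network port could look like; the image name example/rest-server and the port are placeholders, not this project's exact values:

    # A rough sketch: pull an image and layer runtime configuration on top
    # of the artifact; "example/rest-server" is a placeholder image name.
    docker pull example/rest-server:latest

    # Cap CPU and memory, publish a port, and run in the background.
    docker run --detach --name rest-server \
        --cpus="1.0" --memory="256m" \
        --publish 8080:8080 \
        example/rest-server:latest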
The way it works is that, depending on the platform you're using, after you download it you will get different commands. I've been using a few different images: during the previous videos I've been showing you how to run Postgres, Vault, Jaeger, Prometheus, and even Redis and Memcached. The idea is that you can run a container with any version of any service or program you want to have, and then after a while, if you want to get rid of it, you literally just remove it. There is an option of actually saving the volume, the data it's supposed to be saving or interacting with, but the whole point of defining containers is that because it's super easy to create them, you literally have the opportunity to run multiple different servers, different versions of those servers, at the same time, using different ports and whatnot. So that's one of the things I do like about Docker. And Docker is not the only one, nor the first one handling containers, but it is the first tool that made them popular and easy for everybody to use; that's why it became such an important project that nowadays all the cloud providers use it for orchestrating and, depending on the traffic and those configurations, creating different services, autoscaling, and those kinds of things.

Understanding Docker, in my opinion, requires probably just a few different commands or options: creating an image, running an image, understanding what a container is, and understanding what an image is. So think of an image as an artifact. If I run the command docker images after installing, you will notice that I have a few different images right here, locally on my machine. And docker ps is the equivalent of the ps command if you're using Unix; in this case I'm using macOS, but if you use Linux it will be much the same: it shows all the containers that are running. So you have images that look like programs, but really they are artifacts that represent a collection of programs wrapped up for Docker; the container is the actual thing that is running, and the image is the artifact. For example, I have a few containers stopped right here, and I'm running one Postgres, another Postgres right here, and a Redis. Because you have the flexibility of running different services, you can run as many of those servers as you want, as many as you need, and the cool thing is that because you can define multiple ports, and stop them as needed, you can literally use multiple versions of the same kind of service. At the moment I'm using 12.5 for Postgres, but imagine I wanted 11 or maybe 9: I don't really have to compile all of that locally to run that service. I know you can do it, specifying the different paths and different LDFLAGS and those kinds of things, so there is an option to do it locally already, but the cool thing is that when you're using Docker, things are much easier; there is real flexibility when you're using Docker.
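As a quick sketch of those commands, and of running two versions of the same server side by side, something along these lines works; the container names, password, and host ports are placeholders:

    # List the images available locally, then the running containers
    # (docker ps is the moral equivalent of the Unix ps command).
    docker images
    docker ps

    # Run Postgres 12.5 and Postgres 11 at the same time by mapping them
    # to different host ports.
    docker run --detach --name postgres125 \
        --env POSTGRES_PASSWORD=secret --publish 5432:5432 postgres:12.5
    docker run --detach --name postgres11 \
        --env POSTGRES_PASSWORD=secret --publish 5433:5432 postgres:11

    # When you no longer need one, remove it; --volumes also drops its
    # anonymous data volume.
    docker rm --force --volumes postgres11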
Now, I'm giving you all of this because what I'm going to do next is discuss with you how the actual Docker image, or rather the Dockerfile, was built. This is one of the cool things about Docker: it allows you to specify different layers, and depending on how you write your Dockerfile, it will perform in different ways. Each one of these lines indicates a layer, and Docker caches each of those layers: if you make a change and the previous layers didn't change, and you have those layers locally on your system, those layers will not be run one more time. It's a caching mechanism that Docker has, and it allows you to speed up the build of those images. Specifically, what I'm trying to do here for this project is create a small artifact, a small image, that happens to contain only two binaries and the files needed for using them. I'm building, in this case compiling, the migrate binary for running the migrations and the rest-server binary for running the HTTP server, and I'm copying over to the final image the example file for the environment variables configuration and the db folder, which includes all the database migrations. The idea is that as soon as I have this available, I can run it on any machine that happens to support Docker. The cool thing, besides all of that, is that everything is self-contained.
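Here is a rough sketch of what a multi-stage Dockerfile along those lines could look like; the directory layout, package paths, file names, and base images are assumptions, not necessarily what this repository uses:

    # Builder stage: each instruction is a cached layer, so unchanged
    # steps are not re-run on later builds.
    FROM golang:1.16 AS builder
    WORKDIR /build
    COPY . .
    RUN go build -o migrate ./cmd/migrate
    RUN go build -o rest-server ./cmd/rest-server

    # Final stage: only the two binaries, the example environment file,
    # and the db/ folder containing the migrations.
    FROM debian:buster-slim
    WORKDIR /api
    COPY --from=builder /build/migrate /api/migrate
    COPY --from=builder /build/rest-server /api/rest-server
    COPY --from=builder /build/env.example /api/env.example
    COPY --from=builder /build/db/ /api/db/
    CMD ["/api/rest-server"]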
Now, if I look at the docker-compose file: Docker Compose is a way to orchestrate different containers that refer to different images, and it allows you to run all of them at the same time, like a self-contained collection of Docker containers and programs that depend on each other if needed. The cool thing about the way it's structured at the moment, the way the configuration exists, is that you can specify: hey, I have this service called api that depends on the following four services. I've covered all of these, Prometheus, Vault, Postgres, and Jaeger, in the previous episodes, and you can feel free to check all of that out. The only problem with the way Docker Compose works is that when I run docker-compose up, although I define depends_on right here, it doesn't actually wait for the dependency to be healthy. The reason is that being healthy depends entirely on the actual container itself; there is no standardization of what being healthy means. For a database it could be, okay, it's listening on a port, say 5432 for Postgres, or 3306 for MySQL, or maybe something else for Kafka. This is explicitly spelled out in the Docker documentation, and it's something you need to consider. I'm calling it out because, if you notice right here, what happened is that because it takes some time to start my database, my API server, my HTTP server, fails, because the actual database is not ready yet. That's why there needs to be a way to somehow coordinate this group of containers, this group of services, while you're trying to run them, and that's why, at the same time, I'm saying: you know what, let me try to run api one more time. I'm calling this out again, repeating what I said just now, because Docker specifically documents this, and there is no way around it unless you add a program that actually listens for those events, that waits for the servers a service depends on to be available before it runs.

So if I go back to my services, you will notice that everything is available, everything is running: I have my rest-server, Jaeger, Postgres, Prometheus, and Vault. If I copy this over and go to my server, you will notice that it's failing, obviously, because that's not the path; but if you remember, when we were doing the Swagger UI video, everything was here. Okay, so this is a problem: I made a mistake with the actual address. It should be 127.0.0.1, because the way it's configured in the OpenAPI configuration, it's actually referring to that address, not 0.0.0.0. What is happening right here, again, remember, is that we are using Docker containers for all of this. This is self-contained; none of this existed before we actually ran the docker-compose up command, so everything is brand new. If I try to create a new record, it's going to fail, and it's going to fail because the actual table does not exist. Remember, when we ran this locally, we started the PostgreSQL server, ran the migrations, and ran Vault, Prometheus, and Jaeger, everything manually; but when we're using Docker Compose, none of that happens unless we explicitly tell it to. The way to fix it, and I have it in the actual docker-compose file, is an instruction that literally runs the api service, but instead of using the command I have right here, it uses the migrate command, which, if you were curious and paying attention, is the binary I copied over from the stage above. So in this artifact I'm including the rest-server, I'm including migrate, and I'm including the migrations, as well as the environment example for configuration values. All of that is just to make software development, or rather testing, a little bit easier. So if I run this again, it will try to run the migrations, and it succeeded. If I go back to my create call and execute the same request, you'll notice that it now creates the post it was failing on before. All of this is to confirm that everything is working as expected, and we are okay with that.
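To make that concrete, here is a sketch of what the relevant part of such a docker-compose.yml and the one-off migration step could look like; the service names, ports, and binary paths are assumptions based on the description above, not the project's exact configuration:

    # Sketch of a compose file with an api service depending on postgres.
    version: "3.7"
    services:
      postgres:
        image: postgres:12.5
        environment:
          POSTGRES_PASSWORD: secret
      api:
        build:
          context: .
          dockerfile: build/rest-server/Dockerfile
        ports:
          - "8080:8080"
        depends_on:
          - postgres   # orders startup only; does not wait for "healthy"

    # Because depends_on does not wait, run the migrations as a one-off
    # command once Postgres accepts connections, then start the api:
    #
    #   docker-compose up -d postgres
    #   docker-compose run api /api/migrate
    #   docker-compose up -d api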
Okay, so now that we know all the images are correctly created, and we know the Docker Compose setup is working, the next step is to have a way to publish these images to a registry. This registry could be any of the ones I mentioned before: Docker Hub, Google Cloud Container Registry, or Amazon Elastic Container Registry. It's a way to share the images we have with our users, whether that's a CI/CD pipeline or the actual services we run in production or in different environments. For doing that, this time I'm going to use Docker Hub.

The way Docker Hub works is that you can log in, connect your GitHub account to it, and create a repository. Let's say I want to call it todo-microservice-example; I already linked my GitHub account to my Docker Hub account, so after creating it, if I go to Automated Builds and then link to GitHub, I can specify the project I want to link it to. So if I go to todo-microservice-example, I can specify what branch I want to use, in this case main, and if you look at my path, I have it under the rest server, so the Dockerfile location will be build/rest-server/Dockerfile and the Docker tag will be latest; then I can save and build. (A sketch of the equivalent manual build-and-push commands appears after this transcript.) So if I go to main, rebase, and push, what is going to happen is that it actually triggers the build; it's already queued up, so it's going to wait a little bit and then continue doing its thing. The first time, it usually takes maybe six to ten minutes to complete, so we're going to be here for a while. One thing I noticed is that because I added the repository before pushing the new branch to main, it actually triggered two jobs; I canceled the other one before, but it still seems to be trying to build. Anyway, it doesn't matter; again, all of these links will be in the description so you can check them out. What's going to happen is that it will clone the repository, build the image, and push the image to this Docker Hub repository.

Now, all of this is just a way to show you what could happen when you work on a project, whatever that is. If you're working on microservices, on a new project, it's most likely, again, not all the time, it doesn't happen all the time, but it's most likely that you're going to be using a cloud provider, either Amazon, Google, or Azure, and it's most likely that you're going to be using a service that requires you to define Docker images. That's why it's really important to understand how these Docker images work. There will be more links in the description so you can check them out. Hopefully this was a useful crash course on how to use Docker images, the way you can connect those to Docker Compose for running local development, and also how you can push those Docker images to a service that you can use for CI/CD, or for pulling the images to run them on actual production services. Hopefully that makes sense, and if not, just let me know; I would like to expand more on these topics in the near future. If you have any more questions, my name is Mario, and I will talk to you next time. Until then, take care, see you.
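As referenced in the Docker Hub section above, this is a sketch of the manual equivalent of the automated build: building the image locally from the same Dockerfile path and pushing it to Docker Hub. The account name youruser is a placeholder, not the actual repository:

    # Build the image from the Dockerfile used by the automated build,
    # log in to Docker Hub, and push the tagged image.
    docker build --tag youruser/todo-microservice-example:latest \
        --file build/rest-server/Dockerfile .
    docker login
    docker push youruser/todo-microservice-example:latest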
Info
Channel: Mario Carrion
Views: 1,492
Keywords: golang, microservices, golang microservices, golang web development, golang tutorial, building microservices golang, golang microservices tutorial, golang opentelemetry tutorial, golang observability, docker-compose golang, dockerhub golang, golang eks, golang ecr, golang docker registry, golang container, golang containerization docker, docker compose golang, docker build golang, golang docker tutorial, golang docker compose tutorial, golang beginners tutorial
Id: u_ayzie9pAQ
Length: 19min 40sec (1180 seconds)
Published: Fri Apr 02 2021