DevOps with GitLab CI Course - Build Pipelines and Deploy to AWS

Captions
In this course, Valentin will teach you how to use GitLab CI to build CI/CD pipelines and deploy software to AWS.

Hello, freeCodeCamp campers, and welcome to this course, which will introduce you to GitLab CI and DevOps. My name is Valentin. I am a software developer, and I like to share my passion for technology with others in a way that is easy to understand. When I am not speaking at a conference or traveling the world, I share what I know by being active in groups and forums and by creating tutorials on YouTube and online courses.

In this course we will understand what GitLab CI is and why we need this tool, and we will start building CI/CD pipelines. During the course we will create a pipeline that takes a simple website, builds a container, tests it, and deploys it to the Amazon Web Services cloud, also called AWS. In other words, we will be focusing on automation.

I have created this course for people new to DevOps who want to use GitLab to build, test, and deploy their software. Don't worry if none of this makes sense right now; if you are a beginner, that is totally fine. You don't need to install any tools, and no coding knowledge is required, although having some will help a bit. I will explain everything you need to know step by step.

This course focuses on GitLab CI, but the course notes are packed with resources that I recommend exploring if you are unfamiliar with a specific topic. Go right now to the video description and open the course notes. There you will find important resources and troubleshooting tips in case something goes wrong. I will also publish there any corrections, additions, and modifications. This is a very dynamic industry and things change all the time, so if something is not working, first check the course notes.

I am a big fan of learning by doing, and you will get hands-on experience building pipelines and deploying software to AWS. Throughout the course I will give you assignments to practice what you have learned. While this course focuses on a specific technology and cloud provider, what you are actually learning are the concepts around DevOps, and with the skills acquired here I am sure you will be able to start using GitLab CI for whatever you need in no time. This is an action-packed course which I am sure will keep you busy for at least a few days.

As always here on freeCodeCamp, please help us make such courses available to you by liking and subscribing, and don't forget to drop a comment below the video. If you like this course, I invite you to check out and subscribe to my YouTube channel (link in the video description), which is packed with content around DevOps, software development, and testing. Feel free to connect with me on social media; I would really love to hear from you. Finally, I would like to thank those who support freeCodeCamp by clicking that Thanks button and making a small donation. I hope you are excited to learn more about GitLab CI, AWS, and DevOps, so with that being said, let's get started.

I have designed this course to be as easy to follow along as possible. You don't need to install any software on your computer, and you should be able to do everything from a browser. I will be using gitlab.com throughout the course, so if you don't have a gitlab.com account, please go ahead and create one. By default you will get a free trial with your account, which will be downgraded to a free plan after 30 days. It only takes a few steps to create an account, and if you don't want to participate in the free trial, that is totally fine; there is also the possibility of skipping the trial altogether.
GitLab is a platform that offers Git repositories, where we store code, and pipelines, which help us build software projects. Now that the registration process is complete, let's begin by creating our first project. We are going to create a blank project, and I am going to call it "my first pipeline". I have the option of providing a project description, which is optional, and I can decide on the project visibility: either private, which means that only I, or people I explicitly grant access to, can view this project, or public, which means it can be viewed by anyone without any authentication. I am not going to initialize this project with a README file; I will simply go ahead and click Create project.

This GitLab project allows us to store files, to use Git to keep track of changes, and to collaborate with others on the project. If the concepts around Git are not clear, check the course notes for a free course on getting started with Git for GitLab. Since I haven't configured Git for this account, you will also see a warning about adding an SSH key. This is also covered in the material I have mentioned, but for the moment we don't need it, so we can simply dismiss it.

The first thing I like to do is change a few settings related to how the interface looks. From the user profile I will go to Preferences, and under the syntax highlighting theme I like to select Monokai. This is essentially a dark theme, and as you probably know, we like dark themes because light attracts bugs, and we definitely don't want any bugs. Jokes aside, some people like it and some people don't; I prefer a dark theme when writing code, but I totally agree that it depends on everyone's preference. There is one more setting I want you to change right now: scroll a bit further down and enable "Render whitespace characters in the Web IDE". This will show us any whitespace characters whenever we are editing files, which is super important. That is everything we need, so I will go all the way to the bottom, click on Save changes, and go back to GitLab.

You will see your projects listed; currently you have only one, so I will click on it. Right now there is absolutely no code inside this project, so the first thing I want to do is create a file. From this new GitLab project we are going to use the Web IDE to create the pipeline definition file. I will click on New file, which opens the GitLab Web IDE, and create a new file; we are already offered a template for it. This file must be called .gitlab-ci.yml. If it is not exactly this name, and you are typing it on your own, the pipeline file will not be recognized by GitLab. This is a very common mistake that beginners make, which is probably why, if you pick the file name directly from the template, you can be sure you won't name it anything else.

In this file we are going to define the pipelines; essentially, we will write configuration here that creates pipelines in GitLab. It is totally fine if you don't know what that means right now; I just want to create a very simple example to make sure we have everything we need in order to follow along with the rest of the course. So I am going to write something like test, followed by a colon, and then go to the next line. You will see that everything is already indented; with this indentation of four spaces, I am going to write script, a colon, and a space, and then use the echo command to display a text. The echo command simply prints a message, and we will be able to see it later in the logs. This .gitlab-ci.yml file allows us to describe our pipeline; as I said, don't worry if it does not make sense yet. This is just a test to ensure we have everything we need to get started.

Now we are going to commit these changes, which means introducing this file into the GitLab repository. If we click on the project name again, we exit this view, and you will see that the pipeline failed. Every time we make changes to this project, the pipeline will be executed based on what we have defined in this YAML file, but right at the top there is an indication that something is wrong, and if you look inside the pipeline you will get additional information. So what is going on? In order to run your pipeline using the gitlab.com infrastructure, you need to verify your account. This is not the same as verifying your email address, which you have already done; you need to go through an additional verification step. Unfortunately, some people decided to take advantage of this free service and have abused it for mining cryptocurrency, so for the time being you will be asked to verify your account using a credit card. Your credit card will not be charged or stored by GitLab; it is used solely for verifying your account, just to ensure that you are one of the good guys, and I know that you are. I know this is a bit annoying, but this is how things are right now. I also know that credit cards are not that widespread in some countries and this may be inconvenient; maybe you can ask a friend to help out, and I hope that GitLab will introduce alternative verification options. Nevertheless, verifying your gitlab.com account and using the gitlab.com infrastructure is the easiest way to follow along with the course, so if you can invest five minutes now and get this done, it will save you hours later. You can use your own infrastructure to run GitLab jobs, but it is more complex, and my experience from training thousands of students is that people new to GitLab who use their own infrastructure have issues running their pipelines and waste a lot of time trying to get them to run properly. You have been warned; but if you want to go down this path, I have added some resources to the course notes, which you can find in the video description. To get started with the verification process, click on Validate account, and you will be asked to enter your credit card information.

I hope the validation went okay. To see if everything is working properly, I will go back to the project, open the Web IDE, and click on the .gitlab-ci.yml file to make a change to it: the message is now going to be "hello world 2". I will commit this change directly into the main branch, and if I look at the bottom, I should see that something is happening: very soon a pipeline will be started, and you will see that a pipeline with a specific number is now running. I can click on it.
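For reference, a minimal sketch of what the .gitlab-ci.yml file might look like at this point (the message text is just our example):

test:
    script: echo "Hello world 2"

A single command can be written directly after script:; a bit later we will turn this into a list so the job can run several commands one after the other.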
If I click on this job execution, I can see the job logs. What we are interested in is, first, seeing the "hello world" message, and second, seeing the line that says "pulling docker image". This is very important, not for this job itself but for what we are going to do throughout the course: we want to make sure that this execution is actually using Docker. If this is working, fantastic, you can jump directly to the next lecture; otherwise, check the course notes for some troubleshooting ideas.

So what is a pipeline? Allow me to make an analogy with an assembly line used to manufacture a physical product. Any product goes through a series of steps. Take a laptop, for example; to oversimplify, the assembly line would have the following steps: take an empty laptop body case, add the mainboard, add a keyboard, do some quality assurance to ensure it turns on and works properly, then put it in a box and finally ship it to customers. Now, we are not using GitLab to produce physical products, we want to build software, but producing software has similarities to what I have just described: before we can ship anything, we go through a series of steps. So let's try to build the laptop assembly line in GitLab CI; well, kind of. Instead of real components we will use a file, a folder, and some text.

Let's begin with the first task: take an empty laptop case. We will put this in a job. In GitLab, a job is a set of commands we want to execute. Let's go back to our project and make some changes to it; I will open the Web IDE to view the pipeline file. If you are facing any issues getting the following steps to run, make sure to watch the next lesson, where I go over some of the most common mistakes. We already have a job here, and this job is called test, but we should rename it to something like build laptop, or simply build_laptop. The script part is where we write commands. So far we have used the echo command, but the way we have written it only allows one command, so I am going to go to the next line and start with a dash, which will allow us to write multiple commands, one after the other. Let's change the message to something like "building a laptop", just to give us some information about what we are trying to do. I want you to notice that after the dash I have added a space, and you can see that space represented by a dot. A common mistake made by people just getting started with GitLab and this language, which is called YAML, is that they don't add the spaces or don't properly indent things. Make sure that what you have in your editor looks pretty much the same as what I have here, and I am pretty sure this example will also work for you. As I said, the language we are using to describe our pipeline is called YAML, and YAML is essentially a way to represent key-value pairs. It may look a bit weird in the beginning, but after a few examples I am sure you will get used to it. So far our job doesn't actually do anything; we said we want to build this laptop, not with physical components but with folders, files, and some text.
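At this point the job might look roughly like this (I am writing the job name with an underscore, and the message is just the one we chose above):

build_laptop:
    script:
        # each item in the list is one command; note the space after the dash
        - echo "Building a laptop"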
Let's begin with the first command, which will create a new folder; we want to put everything into a folder called build. On the next line I will use the command that creates a folder, which is called mkdir, and I will call this folder build. Now we have a folder, so let's go to the next line and create a file. I want my file to be inside this build folder, and to create it we are going to use the touch command. The touch command is generally used for modifying the timestamp of a file (essentially you are "touching" it), but it also has an interesting behavior that we will use here: if the file does not exist, it will create an empty file, and initially that is perfectly fine for us. So we create this file inside the build folder we created one step before: with a forward slash we go inside the build folder and specify the file name, which in this case will be computer.txt.

The next question is how we get some text inside this file, and for that we are going to use a command we have already used before: echo. If we use echo on its own, like we did earlier, it will just print the message and we will see it in our build logs, but we can also redirect the output and send it directly to a file, using an operator. Let me show you what I mean: I am going to write echo again, and the first thing we are going to add is the mainboard; think of the file itself as the laptop, and we are adding components to it. If we kept the command as it is, it would just display "Mainboard", but we actually want to get that text inside the file, so I will copy the file path, and to send the text there we use the operator written as two greater-than signs. Essentially, this operator takes the output from one command (echo is a command) and appends it to the specified file, which here is the computer.txt file inside the build folder.

If we want to take a look at the contents of the file, we can do that as well; there are different commands we could use, but one option is the cat command. cat stands for concatenate and can be used for creating, displaying, or modifying the contents of files; in this case we are using it to view the contents of a file, and again we have to specify the path to that file. Just to make sure I don't make any mistakes, I always copy and paste the path of the file. Of course, we can also add the other steps: we wanted to add a keyboard, so I will add that with another echo command, and I think that should be it. We have the mainboard, we have the keyboard, and we can run the cat command once again at the end, which will give us an idea of how the job is working.

In the previous execution we hadn't specified a Docker image to use, and by default GitLab will download a Ruby Docker image, which for our use case doesn't really make a lot of sense. We are going to use a very simple Docker image instead: a Linux distribution that will be started with our job. We can do that by specifying another keyword, image, under the job itself (build_laptop), at the same level as script, and the image name will be alpine. Alpine Linux is a very lightweight Linux distribution; what is most important for us is that it has the commands we are using, which are pretty standard commands available in essentially any Linux distribution, and having a very small distribution will make our job run much faster.
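Putting these steps together, the build job might now look roughly like this (folder and file names are the ones we chose above):

build_laptop:
    image: alpine
    script:
        - echo "Building a laptop"
        # create a folder for the build output
        - mkdir build
        # create an empty file inside it
        - touch build/computer.txt
        # ">>" appends the output of echo to the file instead of printing it to the log
        - echo "Mainboard" >> build/computer.txt
        - cat build/computer.txt
        - echo "Keyboard" >> build/computer.txt
        # show the final contents of the file in the job log
        - cat build/computer.txt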
Let's commit these changes and see how the pipeline runs. I will commit again to the main branch, and if I am impatient I can click directly on the pipeline, or I can go to the project page, where you will see the pipeline running. The pipeline is also available under CI/CD > Pipelines, which lists all the pipelines that have run; clicking on the pipeline ID opens it. You will see that our pipeline contains one job, and this job has been assigned to the test stage: by default, if no stage is defined, a job will use the test stage. It doesn't really matter right now what the stage is called; we just want to make sure that the commands we have written are actually working. We get no errors, so the job has executed successfully. We can click on it and take a look at the execution logs to see exactly what has happened. In the logs you will see the commands we have executed: echo prints "building a laptop", then we create a new folder, put a file inside that folder, and add some text to that file; then we check the contents of the file and you see the word "Mainboard" being displayed; then we add the keyboard, and the contents of the file now contain both "Mainboard" and "Keyboard".

So what is a pipeline? It is a set of jobs organized in stages. Right now we have a single job that belongs to the test stage, but in the upcoming lectures we will expand this pipeline.

It is quite common, when you are just getting started with defining these pipelines, to make some mistakes. Let me show you some common mistakes that lead to an invalid configuration, which in turn leads to GitLab not running your pipeline. I am going to make this a bit bigger on the screen so that you can easily compare it with what you have. It is very important that in some places you have a colon: for example, when defining the job, in order to add image and script as properties of that job, the colon after the job name is required. If I remove it, the editor will show squiggly lines indicating that something is wrong; most of the time these messages are not so easy to understand, so it is more important to double-check that what you have written is exactly as it should be. It is also important to have spaces where spaces are expected: for example, whenever you write echo, mkdir, or touch in the script, there needs to be a space between the dash and the command. This is why, in the beginning, I asked you to enable rendering of whitespace characters, so that you can easily see them in your script. If I remove that space and write something like "-echo", it will not show an error here, because this is actually still valid YAML.
But when GitLab tries to execute this pipeline and looks at these commands, it will think you are trying to run a command that starts with "-echo" and will say it cannot find that command. For that reason you need the space, to make a difference between the dash, which indicates a list item, and the command itself. What is also important is the indentation. As you can notice, there are four spaces here, so everything that is under build_laptop is indented by one level. If I change that, it will no longer belong to build_laptop, and most likely the configuration will become invalid, so always make sure you have the right indentation. Two spaces are also fine; by default this Web IDE uses four spaces, just make sure the indentation is there. In the course notes you will find even more examples of why your pipeline may be failing, including some common error codes you may encounter, so definitely check the course notes if you are still having issues getting this pipeline to run.

In this lesson we will try to understand what YAML is. If you already know YAML, feel free to skip this, as I will not be covering any advanced features. The most important reason why you need to know some YAML basics is that you will be facing many errors while writing YAML, and as I mentioned in the beginning, this is normal. Trust me, I made the same mistakes, and sometimes I still make mistakes while writing YAML if I am not paying attention. If you have already been exposed to formats such as JSON or XML, I am sure you will be able to understand the basics of YAML very easily; actually, YAML is a superset of JSON, and you can easily convert JSON to YAML. XML, JSON, and YAML are all human-readable data interchange formats. While JSON is very easy to generate and parse, YAML is typically considered easier to read; as you will see, it is not so easy to write, but we will get to that. Quite often YAML is used for storing configuration, so especially if you are learning DevOps you will probably face YAML a lot, and this is exactly why we are using it in GitLab as well, to store the pipeline configuration.

At its core, YAML follows key-value storage principles. Let me open a new file inside the editor, which I will call test.yaml, and start with a few basics. How do we represent a key-value pair? Say we have a name; typically you would write something like "the name is John", and anyone would understand that. In YAML we use a colon to separate the key from the value, so I will write name, a colon, and John, and what is important is that we have a space after the colon; you will see the color in the editor change. Now we have a key-value pair where the key is name and the value is John. We also call this a mapping: we are mapping a key to a value. On a new line we can add an additional key-value pair, for example age: 23. It is important to know that the order of the properties does not matter, so whether I write the name first and then the age, or the other way around, the same thing is defined. Quite often we also need to define lists, which in YAML are called sequences, and we do that by writing each value on a new line. For example, let's write some hobbies: sports, youtube, and hiking.
We put each of them on a new line, but each line must start with a dash and a space: dash and space, dash and space, dash and space. Now we have a list. However, the way I have written it doesn't work just like that: if we removed everything else, this would probably be a valid list, but we cannot simply stick the list in between the other properties. What is important to know about YAML is that it uses indentation for scope; the indentation is what tells us which value belongs to which key. So as it stands, this is not valid, but if we write hobbies as a key, then this list belongs to that key; the mapping now contains a key and this list of values. This is valid, and we also like to indent it, so I will select everything and press Tab, which indents all these values.

Additionally, we can have nested structures. For example, if I write address as a key, then on the following lines I can write additional key-value pairs which all belong to the address, and you will see they are indented: I can write street, then on a new line city, and on another line the zip code. By using this indentation we show that street, city, and zip all belong to address. If we didn't have this indentation, they would become properties of something else. So in this case name, age, hobbies, and address are properties on the first level, and street, city, and zip are under address.

In terms of lists, we can also build some more advanced ones. The hobbies list is a very simple list, but let's say we want to add a new key called experience, and for the experience we will create a list describing the professional experience of this person. We can have a job title, say junior developer, and instead of creating a new item in the list we stay at the same level as title and write something like period, which indicates in which period this person was a junior developer, say from 2000 to 2005. Then we can add a new item to the list, again with a title, let's say this person is now a senior developer, and we can define a different period, which will be since 2005.
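Putting those pieces together, the test.yaml example might look roughly like this (the street, city, and zip values are made up for illustration):

name: John
age: 23
hobbies:
    - sports
    - youtube
    - hiking
address:
    street: Main Street 1
    city: Springfield
    zip: "12345"
experience:
    - title: junior developer
      period: 2000 - 2005
    - title: senior developer
      period: since 2005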
Additionally, we can take everything we have here and wrap it: let's say this is a person. We write person as a key and indent everything else by one level, and then all these properties belong to the person. You can see the editor still complains about a bad indentation in a couple of places, and as you have learned, indentation is very, very important, so I will add two more spaces here and two more spaces there, and now the indentation is correct.

Now, when we are writing GitLab CI pipelines, we don't really get to choose which data structures to use. We have the liberty of designing the pipeline, but we need to do it in a way that GitLab will understand. Looking at our pipeline, we cannot just decide to call image something else, for example docker_image or docker, and we cannot write scripts instead of script, because that is something GitLab will not understand. Yes, from a YAML perspective it is still valid, but from the perspective of GitLab it is not according to what was agreed in advance. So it is important that you understand what we are doing here, how the indentation works and how these key-value pairs are written; when we write jobs, we decide how the job is named and what exactly happens inside it, but some keys are reserved and simply cannot be renamed. You just have to pay attention that what you write is exactly like the examples I am showing you, otherwise GitLab will not be able to understand what you mean.

Throughout the course we will be using Linux and Linux commands. We have already learned some commands such as echo, touch, mkdir, cat, and so on. We typically use these commands through a command-line interface, or CLI, like the one you see here: for example, I can create a new folder, switch inside it, and list its contents with ls; I can use the touch command to create a file named computer, and with ls again I can see which files are inside the current folder. We type the commands that you have seen in the pipeline through this command-line interface. Sometimes we call it a console, a terminal, or a shell; while technically speaking they are not the same thing, you may notice me and others using these terms interchangeably. A command-line interface is the opposite of a graphical interface. In a graphical interface you see text and colors, you have buttons you can click, and things happen through your interaction: you move your mouse and click something, or you use your keyboard, and something changes on the screen. In a command-line interface, we work by writing commands. The computers we interact with have no user interface we could use anyway, and automating graphical user interfaces is not as easy and as reliable as simply using commands. The command-line interface that computers have is called the shell. The shell is simply the outer layer of the system, the only thing we can see from outside and interact with: we send commands to the system, and the system runs them. That is how we interact with it.
When using GitLab CI, we essentially automate a set of commands which we execute in a particular order. While I will explain every command we use throughout the course, if this is new to you there is absolutely no replacement for trying things on your own; please see the resources in the course notes for this lesson on setting up a Linux environment on your own computer and on which Linux commands you should know.

Let's talk for a minute about the GitLab architecture. At a minimum, the GitLab architecture for working with pipelines contains the GitLab server and at least one GitLab runner. The GitLab server manages the execution of the pipeline and its jobs and stores the results; it knows what needs to be done, but it does not do the work itself. When a job needs to be executed, it will find a runner to run the job. A runner is a simple program that executes the job. A working GitLab setup must have at least one runner, but quite often there are more of them to help distribute the load. A runner will retrieve a set of instructions from the GitLab server, download and start the Docker image specified, get the files from the project's Git repository, run all the commands specified in the job, and report the result of the execution back to the GitLab server. Once the job has finished, the Docker container is destroyed. If Docker is something new to you, check the course notes for a quick introduction.

What is important to know is that the Git repository of the project will not contain any of the files created during the job execution. If we go back to our project, you will see that inside the Git repository there is no build folder and no computer.txt. So what exactly is happening? Let's go inside one of the jobs and quickly go through the log so that you can understand what is going on. Right at the top you have information about the runner that is executing the job, and a runner will also have an executor; in this case the executor is docker machine. On line four of the log you will see which image is being downloaded: "pulling docker image alpine", because this is what we have specified. Then the environment is prepared, and in the following steps the files are fetched from the Git repository, so all the files you have in the repository are also available inside the runner. After this point the Docker container has been started, you have all the project files, and the commands specified in the pipeline start executing: we create the folder, create the file, and put some text inside it. Then we don't do anything else, so the job succeeds because there are no errors, and the container is destroyed. The Docker container that was created in the beginning, maybe just a few seconds ago, is destroyed; you cannot log into it, you cannot see it, it doesn't exist anymore. It has done its job, executed the commands we specified, and then it was destroyed. Since every job runs in a container, this gives us isolation and flexibility: by having our job configuration stored in the YAML file, we describe how the environment where the job runs should look. In practice, we don't know or care which machine has actually executed the job, and this architecture also ensures that we can add or remove runners as needed.
While the GitLab server is a complex piece of software composed of multiple services, the GitLab runner has a relatively simple installation and can run on a dedicated server or even on your laptop. At the top of the job log you have seen some information about the runner, so the question is: where is this job actually running? To answer that, we go to the project Settings > CI/CD and expand Runners. You will see two categories: specific runners and shared runners. For this job we have used shared runners; any project within a GitLab installation can use shared runners, and these shared runners are offered by gitlab.com and shared between all users. As you have seen, there are multiple runners, and honestly we don't really care which of them picks up our job, because we have defined exactly what the job should do, which Docker image to use, and which commands to execute; as long as the runner knows how to deal with a Docker image and can execute these commands, we are fine with any runner. This is an oversimplified version of the GitLab architecture, but the main idea I want you to take away is that we are using Docker containers in our jobs: for every job, the GitLab runner will start a new Docker container, execute the commands we have specified, and destroy that container once the job execution is done.

So now we have created this laptop and defined all the steps inside the build_laptop job, and with the cat command we have also visually verified that the computer.txt file contains everything we expect. But we really want to automate this process; we don't want to go inside the job logs and check whether everything was successful. So let's expand our pipeline and add a test job that makes sure our laptop contains all its components. I will create a new job called test_laptop, use the same Alpine Linux image, and define the script. This time we need a way to actually check whether the file has indeed been created; that will be a very basic test, and what we can do is use the test command. The test command allows us to check whether the file exists; it has a flag, -f, followed by the path to the file. Essentially, this command ensures that the file really exists, and if it doesn't, it will fail the job. Let's commit this and see how the pipeline looks.
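The new job, as just described, might look roughly like this (no stages are defined yet at this point):

test_laptop:
    image: alpine
    script:
        # "test -f" returns a non-zero exit code if the file does not exist,
        # which makes the job fail
        - test -f build/computer.txt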
Looking at the pipeline, we notice something unusual: we are building the laptop and testing it at the same time, while what we wanted is to have these in two separate stages, first build and then test. Currently both jobs are assigned by default to the test stage and run in parallel, but these are not the kind of jobs we can execute in parallel, because they depend on one another; specifically, the test job depends on the build job completing first. I am not even going to look at why the test job failed, because the way this pipeline looks right now doesn't make a lot of sense, and we have to go back to the concept of stages. We want to have two different stages, so we are going to change our pipeline configuration and define them: we want a build stage and a test stage.

In GitLab we can define another piece of configuration, called stages, and as a list we define the stages we want to have: the build stage and the test stage. To specify which job belongs to which stage, we also add the stage to the job configuration. For example, we want the build_laptop job to belong to the build stage, so I write stage: build inside it; the same goes for test_laptop. If you don't specify a stage, the job automatically belongs to the test stage by default, but we actually want to be as explicit as possible, so we write stage: test there as well. So we have defined the stages, assigned the build_laptop job to the build stage, and assigned the test_laptop job to the test stage; let's commit these changes and take another look at the pipeline. We can now see the two stages we have, build and test, and these stages run one after the other: the test stage does not start until the build stage is over, so first we build the laptop, and then we can start with the test.

Unfortunately, the pipeline is still failing, so in this case we really have to take a look inside the job and understand what is going on. We see that the last command we executed is the test command; up to this point everything seems to be working fine, this was the last command executed, and somehow the job has failed. In the next lecture we will try to understand why pipelines fail, so let's jump into that and continue debugging this problem.

So why did this job fail? Looking inside the job logs, we see an error that says "Job failed: exit code 1". Exit codes are a way to communicate whether the execution of a program has been successful or not. An exit code of 0 indicates that a program has executed successfully; any other exit code, which can be a number from 1 to 255, indicates failure. In this case, exit code 1 is not 0, so it means something has failed. This exit code is issued by one of the commands in our script: as soon as one of these commands returns a non-zero exit code, the execution of the job stops. Here we have only one command, so it is relatively easy to figure out which command issued this exit code; most likely it is the last command you see in the logs. What test is trying to tell us is that it has checked for the existence of this file and couldn't find it. If the file had been there, we would have gotten an exit code of 0 and the execution would have continued; in this case the file is for some reason not there, and we have received an exit code of 1.
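For reference, the whole pipeline at this point might look roughly like this; the test job still fails, because the build folder only ever existed inside the build job's container:

stages:
    - build
    - test

build_laptop:
    image: alpine
    stage: build
    script:
        - mkdir build
        - touch build/computer.txt
        - echo "Mainboard" >> build/computer.txt
        - echo "Keyboard" >> build/computer.txt
        - cat build/computer.txt

test_laptop:
    image: alpine
    stage: test
    script:
        # fails with exit code 1, since the file does not exist in this fresh container
        - test -f build/computer.txt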
This tells GitLab that the job has failed, and then the entire execution stops; if a job in the pipeline fails, by default the entire pipeline will be marked as failed as well. Let me show you an example. Let's go back to our pipeline definition: we already know that the build_laptop job works just fine, so somewhere in the middle of it I will add a new command, the exit command, and exit with code 1. What we should observe is that the commands echo, mkdir, and touch are still executed, but the commands after that, where we put the mainboard in the file, use cat, and add the keyboard, are no longer executed. I have run the pipeline again, and you will see that the build job failed, and because this job failed, the rest of the pipeline was not executed; it was interrupted, because once something has failed it doesn't really make sense to continue with the rest of the stages. Let's go inside and take a look at the execution logs to see what exactly happened: the echo command is executed, we have created the directory, we have created the file, and then comes exit 1, and the log says "Job failed: exit code 1". We have forced this failure, but I wanted to demonstrate where in the execution you can notice that something went wrong. Go through the commands that were executed, look at your job configuration, and try to figure out what happened and what the last command was that did something; sometimes you will find hints inside the logs about what went wrong. In this case there are not a lot of hints, but these are also very simple commands. It is very important to read the logs from the beginning, as high up as possible, to understand which Docker image was used, what happened before, and whether there are any warnings or hints that something went wrong (not the case here), and then to locate the last command that was executed and ask why it failed. I am going to remove this exit command, and we will continue with the rest of the course and try to understand how to get this simple pipeline to run.

Let's go back to our pipeline configuration and understand what we are doing wrong. As you probably remember from the architecture discussion, every job runs independently, which means the build_laptop job will start a Docker container, create the folder and the file, put the text in as instructed, and at the end the container will be destroyed, meaning the file we created inside that container is gone as well. test_laptop then starts a completely new container that does not have this file, and for that reason the test cannot pass: test will always tell us there is no file there, because where would this file come from? Does this mean we can only use a single job, that is, that we need to move the test command into build_laptop? That would be very inconvenient, because we would have one big job that does everything and we would lose the overview of what our pipeline steps really are. Fortunately, there is a way to save the output of a job. In this case, the output we are actually interested in is this file, including the folder it is in, and GitLab has a concept for this, called artifacts.
An artifact is essentially a job output: something coming out of the job that we really want to keep rather than throw away. We may have used other files or other commands within the job, but we are only interested in the final output. To tell GitLab "I really want to keep this file and this folder", we need to define the artifacts. Artifacts are an additional piece of configuration, an additional keyword we add to our pipeline; as the name says, it is artifacts, plural, so don't write artifact, because GitLab will not recognize that. As a property of artifacts we use paths; notice that it is indented as a key, not as a list item, and then below paths we can add a folder or a file that we want to save. In this case we tell GitLab to save everything that is inside the build folder. Now let's give it a run and see how the pipeline performs this time.

Looking at the pipeline execution, we now see that both building the laptop and testing the laptop are successful. So what exactly happened behind the scenes? Did we reuse the same Docker container? To understand how these jobs work now, we have to go inside the logs and see what the build job did differently this time. What I want you to notice is that towards the end, if you compare it to the logs of the previous run, there is an indication that something is happening: it says "uploading artifacts for successful job", it tells you which artifacts are being uploaded, referencing the build folder, it says that two files and directories were found, and these are uploaded to the coordinator. The coordinator is, to put it very simply, essentially the GitLab server. So the runner has finished the job, noticed in the configuration that it needs to do something with these files, archived them, and handed them back to the GitLab server, essentially saying: "I am finished with this job and I am going to destroy this Docker container; you wanted to keep these files, so here you go." The runner is not storing these files itself; they are saved somewhere else, in the GitLab server's storage. When the next job is executed, something very similar happens. If I go to the test_laptop job, at the beginning of the logs there is also a new indication that something different is happening: it says "downloading artifacts from coordinator", which essentially means the build folder is now downloaded into the new Docker container we have just created. We have managed to copy these files from one job to the next, and this is why the test command is now able to find the build folder and the computer.txt file inside it, and the job passes. If this job is still failing for some reason, it is always a good idea to take a look at the job that generated the artifacts. To do that, we visit the pipeline again, and if we go inside the build_laptop job, on the right-hand side you should see some information about the job artifacts. The artifacts are saved even after the job has terminated, and you have the possibility of inspecting them.
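A sketch of the build job with the artifacts configuration described above:

build_laptop:
    image: alpine
    stage: build
    script:
        - mkdir build
        - touch build/computer.txt
        - echo "Mainboard" >> build/computer.txt
        - echo "Keyboard" >> build/computer.txt
        - cat build/computer.txt
    artifacts:
        paths:
            # keep everything inside the build folder after the job finishes
            - build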
If you are not really sure what the contents of the file are, you don't need to go inside the build job and do any debugging there; you could do that, but what has been saved here is the final version of the artifacts. You can go to Browse, where you will find the build folder we specified and, inside it, the computer.txt file; you can download this file, take a look at it on your own computer, and see whether it has the right content. Just make sure that when you test this, you are actually giving the correct path and not some other path. This is a very important way to inspect the outputs of a job.

At this point I wouldn't say that we are really done with testing the build output. Yes, the file exists and we have tested that, but that is a very basic test; how about checking the contents of the file to see if it contains the mainboard and the keyboard, as we expect? Again, just using a command like cat and displaying the contents in the logs doesn't really help us automate this. We need a tool that, when it doesn't find the respective text in the file, will issue an exit code and tell us that the text is not present, and for that purpose we are going to use the grep command. grep is a CLI tool that allows us to search for a specific string in a file. We specify the string to look for, in this case "Mainboard" (I will copy it from above just to make sure I don't make any mistakes), and we also specify in which file we are looking for it. We already know the file exists, so this is an additional test that we are adding on top: now we check whether the word "Mainboard" is inside the file, and of course we can duplicate this line and add an additional check for the keyboard as well. grep is a really powerful command and supports regular expressions and many other advanced features, but I won't get into those. I am going to commit these changes and in a few seconds take a look at the pipeline.

Looking at the pipeline, the build job is still successful and the test job is still successful. If we look inside, we will see some log output: grep looks for the word "Mainboard" inside the file and is able to find it, so it is displayed, and it also looks for the word "Keyboard" and finds it. What is important about writing tests is to also ensure that your pipeline will fail if one of these tests doesn't work. Sometimes you may think that a command does what it is supposed to do, but the best way to make sure you have really mastered that command is to intentionally introduce an error inside your build job and check whether the test job fails. Let's try that out: inside our build job I will remove the "M" from "Mainboard" and simply commit the change to see if the test job now fails. Looking inside the test job, we can see that the last command executed was grep; in comparison to the previous run, there is no text output below it. This was the last command that was executed, and we get an exit code of 1.
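A sketch of the test job with the grep checks added on top of the existence check:

test_laptop:
    image: alpine
    stage: test
    script:
        - test -f build/computer.txt
        # grep returns a non-zero exit code if the string is not found in the file
        - grep "Mainboard" build/computer.txt
        - grep "Keyboard" build/computer.txt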
That essentially means grep has looked inside the file and couldn't find the word "Mainboard". Let's go back and fix our pipeline. This has been a very important test, because if we don't check that our pipeline will actually fail at some point, then whatever we do inside the pipeline in terms of testing is not very useful. I will add "Mainboard" back to the configuration, and, as we expect, the pipeline works again.

Tests play a very particular role when we are working on this pipeline. When building software there are many different levels of testing, but essentially tests allow us to make changes to our software or to our build process and to ensure that everything still works the same. For example, I heard that if we use the >> operator to put text into a file, this approach doesn't actually require the file to have been created beforehand with the touch command. So how about we try this out and see if we can rely on the tests? I will simply remove the touch command and commit the configuration to see what happens. Looking at the pipeline, the test job is successful, and if we really want to check manually once again, we can go to the build_laptop job, take a look at the artifacts, and see that the build folder and the computer.txt file are there. So apparently we didn't need the touch command in our configuration, and by having the tests we have gained an additional level of confidence: whatever changes we make, we trust the tests, and if the pipeline passes, we know that in the end we will have this computer.txt file with the proper content.

Let's take another look at our pipeline configuration. What if we need to change the name of the file? So far we have used the name computer.txt, but what if we need to call it, for example, laptop.txt? Quite often we don't like it when something that could change, such as a file name, is spread across the entire pipeline, because if we need to make a change, we have to identify all the occurrences. This is a very simple example, and here it is relatively easy, but quite often going into a large file and doing something like a replace-all can lead to undesirable errors. So when we have a value that appears multiple times inside a file or a configuration, we want to put it into a variable; that way, if we need to change that particular value, we only have to change it once and it will be used everywhere. How do we define a variable? We can go inside the script block and define one there: for example, a variable called build_file_name (notice that I am writing it in lowercase, with underscores separating the words), followed by the equals sign and the value laptop.txt. To reference this variable, we copy the name and, wherever we need it, write a dollar sign followed by the variable name; we reference a variable by putting the dollar sign before its name. This is a local variable that is available only inside this script, so it is only available inside this job, and we would need to do the same thing in the other job as well. As you have noticed, we typically write these local variables in lowercase, and using lowercase helps avoid conflicts with other existing variables.
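As a sketch, the version with a local shell variable might look roughly like this (the variable name is the one we chose above):

build_laptop:
    image: alpine
    stage: build
    script:
        # local shell variable, visible only inside this script
        - build_file_name=laptop.txt
        - mkdir build
        - echo "Mainboard" >> build/$build_file_name
        - echo "Keyboard" >> build/$build_file_name
        - cat build/$build_file_name
    artifacts:
        paths:
            - build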
This is one way to do it, but there is also the possibility of defining a variables block. We can go inside the configuration of the job and write variables; this is not a list, which is very important, so below it we write build_file_name, a colon, and the value laptop.txt. This is essentially almost the same as defining it in the script, but we are taking it out of the script and letting GitLab handle it for us; and of course, if the other job needs it, we have to put it there as well, so we end up with some duplication in the configuration. We can also define a global variable, which is available to all jobs. To define something globally, all we have to do is take the variables block and, through the indentation, move it outside of the job, to the root of the document; everything defined at that level is global. So here we define variables, this is the name, and wherever we need it we use the same dollar-sign syntax; and we can remove the block from the test job, because the variable will be available there as well. This is essentially like defining an environment variable that is available for the entire system. While there is no hard rule, we typically write environment variables in all caps, still using underscores to separate the words. Because we are inside an IDE, I will select the text, press F1, and use the transform-to-uppercase command, so the name becomes BUILD_FILE_NAME, and wherever I want to use it I write the dollar sign and this name: in the build commands, in the cat command (which we may decide to keep or replace, that is up to us), and in the test job, here, here, and here. So wherever we had computer.txt, we have now replaced it with this environment variable, and whenever we need to change the file name, we can easily change it in just one place. It is easy to see that adding a few more variables to our pipelines will make managing these details much easier. In this example it is not necessary, but depending on which characters you include in your variable value, you may need to put it between quotes; that is just something to keep in mind, and the simple text we have here will not cause any conflicts with the YAML syntax. Let's commit these changes and see if the pipeline is still working as it should. The pipeline runs successfully, and we can go inside the build job, take a look at the artifacts, and indeed see that we are now using the file name laptop.txt and no longer computer.txt.
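A sketch of the pipeline with the file name extracted into a global variable:

variables:
    BUILD_FILE_NAME: laptop.txt

stages:
    - build
    - test

build_laptop:
    image: alpine
    stage: build
    script:
        - mkdir build
        - echo "Mainboard" >> build/$BUILD_FILE_NAME
        - echo "Keyboard" >> build/$BUILD_FILE_NAME
        - cat build/$BUILD_FILE_NAME
    artifacts:
        paths:
            - build

test_laptop:
    image: alpine
    stage: test
    script:
        - test -f build/$BUILD_FILE_NAME
        - grep "Mainboard" build/$BUILD_FILE_NAME
        - grep "Keyboard" build/$BUILD_FILE_NAME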
so i have mentioned devops quite a few times and by now you have heard of devops as well so what is devops let me tell you first what devops is not devops is not a standard or a specification different organizations may have a different understanding of devops devops is not a tool or a particular piece of software nor is it something you do just because you use a particular tool or set of tools devops is a cultural thing it represents a change in mindset let's take a look at the following example you have a customer who wants a new feature a business person let's call them a project manager would try to understand what the customer wants write some specifications and hand them over to the developers the developers build this feature and pass it on to the testers who test it once ready the project manager reviews the work and if all looks good asks the developers to pass the software package to the sysadmins to deploy it as you can see there's a lot of passing stuff around and in the end if something goes wrong and often things do go wrong in such situations everyone is unhappy and everyone else is to blame why does this happen because every group has a different perspective on the work there is no real collaboration and understanding between these groups let's zoom in on the relationship between the developers and the sysadmins the developers are responsible for building software and ensuring that all the cool features the customers want make it into the product the it operations team is responsible for building and maintaining the it infrastructure ensuring the it systems run smoothly securely and with as little downtime as possible do these groups have something in common yes of course the software the product the problem is the it operations team knows very little about the software they need to operate and the developers know very little about the infrastructure where the software is running devops is a set of practices that tries to address this problem but to say that devops is just a combination of development and operations would be an understatement actually everyone mentioned before works on the software just in a different capacity and since the final outcome impacts everyone it makes sense for all these groups to collaborate the cultural shift that devops brings is also tightly connected to the agile movement in an ever more complex environment where business conditions and requirements change all the time and where we need to juggle tons of tools and technologies every day the best culture is not one of blaming and finger pointing but one of experimentation and learning from past mistakes so we want everyone to collaborate instead of working in silos and stages instead of finger pointing everyone takes responsibility for the final outcome if the final product works and the customers or users of the product are happy everyone wins the customers the project managers the developers the testers the sysadmins and anyone else i did not mention however devops is more than just culture to succeed organizations adopting devops also focus on automating their tasks manual and repetitive work is a productivity killer and this is what we are going to address in this course automatically building and deploying software which falls under a practice called ci cd we want to automate as much as possible to save time and give us the chance to put that time to good use instead of manually repeating the same tasks over and over again but to automate things we need to get good at using the shell working with cli tools reading documentation and writing scripts quite often you may see devops represented by an image of an infinite loop while this does not give a complete picture of what devops really is it does show a series of steps a typical software product goes through from planning all the way to operating and monitoring the most important thing i want you to notice in this representation is that the process never stops it goes on and on in an endless loop which means we keep going through these steps with each iteration or new version of the software what is not represented here is the feedback that goes back into the product devops goes hand in hand with the agile movement
if agile and scrum are new to you make sure to add them to your to-do list nowadays many organizations go through an agile transformation and value individuals who know what agile and scrum are regardless of their role i've added some resources you may want to look into in the course notes if you have some free time while commuting or doing things around the house i highly recommend listening to the phoenix project as an audiobook it is an accurate description of what companies that are not adopting devops go through on a day-to-day basis and it realistically portrays such a transition it is by no means a technical book and i'm sure it will be a fun listen so devops is a set of practices that helps us build successful products to do that we need a shift in thinking and new tools that support automation however i must warn you that you can use tools that have devops written all over them and still not do devops devops is so much more than just adopting a particular tool with that being said let's continue diving into gitlab ci in this unit we will start working on a simple project we want to automate all of the manual steps required for integrating the changes of multiple developers and create a pipeline that will build and test the software we are creating in other words we will do continuous integration continuous integration is a practice and the first step when doing devops usually we're not the only ones working on a project and when we're doing continuous integration we're integrating our code with the code other developers created it means that every time we make changes to the code that code is being tested and integrated with the work someone else did it is called continuous integration because we integrate work continuously as it happens we don't wait to do that we don't want to integrate work once per week or once per month as it can already be too late or too costly to resolve some issues the more we wait the higher the chances we will run into integration issues in this unit we will use gitlab to verify any changes and integrate them into the project i'm going to be honest with you as we build more advanced pipelines you will most likely encounter some issues if you haven't done it yet go right now to the video description and open the course notes there you will find important resources and troubleshooting tips finally let's do a quick recap when we have multiple developers working against the same code repository ci is a pipeline that allows us to add and integrate our changes even multiple times per day and what comes out is a new version of the product if you're still unsure about continuous integration at this point don't worry we'll implement ci in our development process in the upcoming lessons for the rest of the course we'll be using this project it is a simple website built with react which is a javascript technology developed by facebook we don't want to get too much into the technical details because they don't really matter at this point but the first step in order to be able to make changes to this repository is to make a copy of it so for example if i try to open the web ide in this project i will get the option to fork the project by the way you will find a link to this project in the course notes and the course notes are linked in the video description so we can click here on fork and we'll make a copy of this project under our account now that we have made a copy of this project we can open the web ide and start
making changes to it and in particular what we're trying to do is to create the pipeline so let me give you an overview of the tasks that we are trying to automate in this project we have a couple of files one of them is the package.json file which documents which requirements this project has and in order to run this project we first need to install these requirements locally i already have a copy of this project and the command to install the requirements is yarn install now all the requirements have been installed the next step is to create a build and that is done with the command yarn build during this process a build folder has been created and this build folder contains the files required for the website to give you an idea how this website looks and what we actually did here i'm going to run the command serve -s and specify the build folder so now we have started an http server which is serving the files available there i'm going to open this address in a new tab and this is how the website looks so what we're trying to do in this section is to automate these steps we want to install the dependencies we want to create a build and we want to test the build to see if the website is working i've shown you these tools because it is always a good idea to be familiar with the cli tools that we'll be using in gitlab ci in gitlab we try to automate any manual steps but before we do that we must know and understand these steps we cannot jump into automation before we understand what the commands we want to run are actually doing now i'm not referring in particular to the commands i've shown you here because they are specific to this project you may be using python or java or anything else so you don't need to be familiar with these tools in particular i will explain to you what they do and how they work what is important to understand are the concepts the concepts remain the same and this is what we are actually focusing on in this course understanding the concepts around automation so let's begin creating the ci pipeline for this project i'm going to go ahead and create a new file and of course the definition file for the pipeline will be .gitlab-ci.yml the first job that we want to add here is build website so what are we trying to do we're trying to build this website and why do we need to build the website well most projects do have a build step in this case we're creating some production-ready files which are smaller and optimized for production from some source files in the source files you will see an app.js and other files and the build process will take all these files make them smaller and put them together other programming languages may need a compilation step or other steps but typically something happens in the build process where we actually put our project together now of course we don't want to do that manually from our computer we want to let gitlab do this for us so let's go ahead and write the script for this job
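as a quick reference these are the commands i ran locally a moment ago and they are exactly what this job needs to reproduce this is just a sketch and it assumes the serve package is already installed on the machine

yarn install    # download the dependencies listed in package.json
yarn build      # create the optimized production files in the build folder
serve -s build  # start a local http server that serves the build folder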
the first step is to run yarn and yarn is a tool that helps us build this project it is specific to javascript and node.js projects yarn can be used for getting dependencies and it can also be used for building the software so the command that we are running here is yarn build locally as you remember the first thing that i did was a yarn install to install the dependencies and this is something that needs to happen before the build every time we build this website we need to get the dependencies to make sure that we have everything we need and that all of them are up to date and because nothing remains in the container between jobs we need to do this every time locally we would normally only do this when needed for example when we know that we need a newer version of a package but gitlab doesn't have this information so we kind of need to run this all the time we also need to specify a docker image so let's try for example alpine which we have used before essentially what i'm trying to do is to replicate the commands that i've executed on my computer installing dependencies and building the project so let's commit these changes and see what the pipeline does i'm going to commit them to the main branch and click on commit here we can take a look at what the job is doing and we'll see that we get an exit code remember any exit code that is not zero will lead to a job failure and it says here job failed now why did this job fail we have to look at which command we tried to execute and we'll find here something saying yarn not found so what this means is that the docker image we have used does not have yarn so how do we install yarn or how do we normally handle this the thing is we don't have to use this alpine image that you have seen here for most technologies and this includes node.js which is what we're using here and which i already have installed locally this is why it worked locally there are official docker images that we can use and the central repository for such images is docker hub which is a public docker registry so for example if i type node here i will find the official image for node and instead of using alpine i can simply use node so let me go back to the project and write node here now when we're writing just alpine or just node what is actually happening is that we are always getting the latest version of that docker image sometimes it may work but sometimes the latest version may contain breaking changes which may lead to things not working anymore in our pipeline if one day we're getting one version and the next day we're getting something else without us making any changes things may break for that reason it is generally not a good idea to just use node or just alpine as we have done before it is better to specify a version to pin down which version we need and to write it down as a tag now how do we know which version we need for node we're going to head to nodejs.org and here you'll see two versions that are currently available for download what we want is the lts version the latest lts version that i'm currently seeing is 16.13.2 and what's important here is the major version which is 16
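with the official node image in place the job so far looks roughly like this we will pin the version in the next step

build website:
  image: node
  script:
    - yarn install
    - yarn build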
when you are watching this it's important that you come to this page and look at the number here most likely it will have increased over time and you'll get a different version what matters is that you take the latest lts version you see here alright so let's go inside our pipeline and write this version 16 here the way we do this is we have node which is the base image and then we can specify a tag by writing a colon and then 16 most likely this version is already available for node but you can always make sure by going to the tags and checking if this specific tag is available there these major tags are typically available so we will have no issues downloading this so let's commit the changes and see how it goes again i'm gonna push to the main branch while this is executing you will be able to see which image is being downloaded and you'll see it is node with the tag for version 16 now this job requires much much longer to run you will see the duration in this case was one minute and 35 seconds and this is because we have to download what is a relatively large docker image this node image and then we are installing the dependencies and you'll see that these dependencies take 44 seconds to be installed figuring out which dependencies are required and downloading them from the internet takes a bit and then we are actually doing the build which also takes a few seconds to complete but fortunately everything is successful in the end and that is the most important part at this point we don't want to waste a lot of time when we are executing these builds and i know for a fact that this node image is a few hundred megabytes in size this is because it contains a lot of tools and a lot of dependencies which we may not need for that reason with these larger images it's always a good idea to go to the specific image click on the tags tab and take a look at the size of these images because some of them are quite big so we could search here for version 16
and by default we can see that the first tag shown here is more specific than what we're using we see here version 16 point something and it has about 332 megabytes so if we select this image we'll have to download 300 something megabytes every time that's a lot of downloading and a lot of time wasted just to start this image for that reason what i typically do is go to the tags and search for alpine sometimes slim or other variants can also be a good idea and what i'm interested in is something like 16-alpine so let me search directly for 16 alpine and this looks absolutely fine 16 is again the node version that we are trying to use and if we're looking at the size take a look at this it's 38 megabytes so we're going to simply take this go to our pipeline and replace 16 with this tag so instead of using 16 we're going to use 16-alpine i'm gonna commit these changes and take another look at the pipeline to see how long it takes to build now this job still needed quite a bit of time one minute and 26 seconds you may see this duration varying but generally it is a very good practice to use images that are as small as possible because this can save time and you may see this job even go below one minute it really depends on how fast the runner picks up the job and is able to start this image but the main idea is the same we have now managed to automate the first steps in our project we're installing the dependencies and then we are running the build and this seems to be working just fine just to recap we are using this node image before that we tried using the alpine image and the alpine image didn't work because it didn't have node.js installed what we're using now is essentially the same alpine linux image but with node installed so it has the dependency that we need and this node dependency contains yarn this is why yarn didn't work before and now it's working just fine and building the project the most important thing you can do when you're learning something new is practicing and i want to give you the opportunity in this course to practice along not just following what i'm doing but also doing something on your own and for the next assignment i think you already know everything necessary in order to do it what we're trying to do is to create two new additional jobs in this pipeline it is your job now to write these jobs so what are these jobs all about the first job should test the website and the website is currently inside the build folder if i go inside the build folder you will see a list of files your job is to ensure that the index.html file is available inside the build folder so this is the first job that you need to create the second job i want you to create is about running unit tests in order to run the unit tests the command we are using is yarn test and the only thing you need to do is to create a job that runs this command then take a look at the logs and see that the tests have indeed been executed the upcoming lesson will contain the solution to this but please it's super important that you pause this video and try this on your own try as much as you can because this is the best way to learn and to ensure that what i'm showing you in this course is something that you will be able to use in your own projects as well
i hope you have tried to solve this assignment on your own but anyway this is how i would approach this problem i'm here inside the editor for the pipeline and let's begin with just the skeleton we have two new jobs we have test website where we're trying to test the output of the build website job and we also have a unit tests job first of all we have to think about stages what we want to do happens essentially after the build well it depends a bit the test website job definitely needs to happen after the build the unit tests don't necessarily need to happen after the build but we're going to put them in a different stage as well so let's go ahead and define the stages we're going to have two stages build and test and what we need to do is assign these jobs to a stage the build website job will be assigned to the stage build of course then test website will be assigned to the stage test and the same goes for the unit tests now in order to test the website let's write the script we are testing if we have an index.html file there so as you probably remember the command is test -f with this we are testing for the existence of a file the file needs to be inside the build folder and its name is index.html now what we haven't done so far is declare artifacts inside this build website job so as it is right now this command will fail what we have to do is think about the artifacts which artifacts do we have we have to define the paths and the only path that we're interested in is the build folder with this in place the job will be able to test for this file the next thing we need to think about is which image we need for test website theoretically we could use node 16 but this test command doesn't require anything like that so we could just use alpine test is a pretty general command so we don't have to worry about specifying a version or anything like that going with alpine is just fine as for the unit tests we essentially need the node image because we'll be using yarn so i'm going to simply copy the image that we have used in the build website job and the script will also be similar we still need to install the dependencies so yarn install is necessary when we're trying to run the unit tests and the command that we want to run is yarn test so let's double check to make sure that we have everything in place we have defined two stages build and test we have assigned build website to the stage build and we have assigned test website and unit tests both to the stage test which means these two jobs will run in parallel here we're just using the test command to verify that this file exists and here we're installing the dependencies and then running the tests i'm gonna go ahead and commit these changes and we'll inspect the pipeline together and after a minute or two this pipeline will succeed and you'll notice the two stages build and test we're building the website and then we're testing what we have the unit tests are not necessarily dependent on the build itself but we put them together in the test stage if we're looking at test website what do we see well we see the command test -f so it's testing if this file actually exists as part of the job artifacts
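putting the whole solution together the pipeline at this point looks roughly like this

stages:
  - build
  - test

build website:
  stage: build
  image: node:16-alpine
  script:
    - yarn install
    - yarn build
  artifacts:
    paths:
      - build

test website:
  stage: test
  image: alpine
  script:
    - test -f build/index.html

unit tests:
  stage: test
  image: node:16-alpine
  script:
    - yarn install
    - yarn test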
of course if we're looking inside the build website job we can go and look at the artifacts we'll see the build folder with multiple files so not only the index.html file but also some images and some additional files that we don't really care about so much at this point but they are available there and that's the most important aspect and here are the unit tests the unit tests are part of the project and generally when we're writing code we are also writing unit tests to document that what we're building there works properly you will see here that the command that has triggered the tests is yarn test there's only one file here that contains only one test all these tests have been executed and because the tests are passing the job is also succeeding if any of these tests did not work then the job would fail and the entire pipeline would fail as well the first tool we need for integrating work is already in place we are using git for versioning our code and we also use gitlab as a central git repository we take git for granted nowadays but it solves a critical issue and is one of the first devops tools you need to know while this course does not go into git i highly recommend checking the course notes for a comprehensive git tutorial so in our current model every developer pushes changes to the main branch whatever version of the code has been pushed last to the main branch that is the latest version and anyone wanting to add new changes will have to make them compatible with the main branch however this model does have a serious flaw nothing is safeguarding the main branch what if the latest change has introduced an issue and we can't build the project anymore or what if the tests are failing this is a massive problem as nobody on the team can continue working and we can't deliver new versions of the software a broken main branch is like when production halts in a factory it's not good for business we must ensure that the main branch does not contain any broken code the main branch should always work and allow us to deliver new versions of the software anytime we need to so how do we solve this the idea is simple we just don't push untested work to the main branch since we can't trust that developers remember to run all tests locally before pushing a change we want to take advantage of automation and let gitlab automatically run the tests before adding any changes to the main branch the idea is to work with other branches which are later integrated into the main branch this way every developer can work independently of other developers there are various git workflows but we will use one that is simple to understand it's called the git feature branch workflow the idea is simple for each new feature bug idea experiment or change we want to make we create a new branch we push our changes there let the pipeline run on the branch and if everything looks okay we can integrate the changes into the main branch so in other words we simulate updating the main branch before we actually update the main branch if our branch pipeline fails no worries all other developers are unaffected and we can focus on fixing it or if we don't like how things turned out we can just abandon the branch and delete it no hard feelings there either as part of working with branches we create a merge request which allows other developers to review our code before it gets merged a merge request is essentially one developer asking for their changes to be added to the main branch
the changes are reviewed and if nobody has objections we can merge them so this is the plan for the upcoming lessons to start working with git branches and merge requests let's get to work so how can we create merge requests in gitlab first of all to ensure that the chances of breaking the main branch are as small as possible we need to tweak a few settings we're going to go to settings general and right in the middle you should see merge requests i'm going to expand that and what i like to use when working with merge requests is the fast forward merge essentially no merge commits are created and the history generally remains much cleaner there's also the possibility of squashing commits directly from gitlab and here you can set it to encourage squashing commits means that when you have pushed multiple changes to a branch instead of bringing all these commits back into the main branch we squash them together so that we essentially have only one commit which again makes the history much easier to read going forward here in the merge checks we want to make sure that the pipeline must succeed before we can merge something this is a super important setting so let's go to the bottom and click on save changes additionally again from settings we go to repository and from the repository we go to protected branches i'm going to expand this and what we want to do here is protect the main branch essentially we don't want to commit changes to the main branch anymore we want to prohibit that so nobody will be allowed to directly commit something to the main branch in order to do that we go to allowed to push and instead of having some role selected here i'll use no one so no one is allowed to push to this protected branch changes can only go through a merge request these are the initial settings that we need now let's try to make some changes i'll open the web ide open the pipeline file and let's say i'm trying to add a new stage where i'm doing some checks for example there is a linter that i can use a linter is simply a static code analysis tool that is used to identify potential programming errors stylistic issues and sometimes you know questionable code constructs since it is static it does not actually run the application it just looks at the source code and most projects do tend to have such a linter so just for the sake of completeness i also want to add a linter here i'm gonna go ahead and write here linter this is the name of the job the image that i'm using is still node and the reason for that is that inside the script the command we are running is yarn lint and of course we also need to install all the dependencies first additionally we have the possibility of assigning this job to a stage now by default gitlab comes with some predefined stages the predefined stages are a .pre stage a build stage a test stage a deploy stage and a .post stage these are all predefined and to be honest the stages we are listing in our file are just there to make it clear which stages we are using we're essentially redefining something that already exists so for this linter i could go ahead and use a stage and the stage name will be .pre this linter job has absolutely no dependencies on the build itself
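the linter job i just described looks roughly like this assuming the project's package.json defines a lint script which is why yarn lint works here

linter:
  image: node:16-alpine
  stage: .pre
  script:
    - yarn install
    - yarn lint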
the same goes for the unit tests they are also not dependent on the build so i could go ahead and move them to the same stage just to make this clear notice the dot there and because this is a predefined stage we don't need to redefine it in the stages list i could write it there but it's not really needed all right now going back the editor that i opened here is from the main branch you will see here that main is selected so let's go ahead and commit and you'll see i'm no longer allowed to commit to the main branch this is now disabled the settings that we made ensure that we are no longer allowed to directly make changes to the main branch so we have to create a new branch quite often when we are creating branches we tend to use some naming conventions it is totally up to you what you want to use or what your organization uses quite often you will have something like feature forward slash and then the name of a feature sometimes you may reference a ticket or something like that so for example you might have your ticket number one two three four dash add dash linter you will see that i'm adding no spaces or anything like that it's totally up to you how you use forward slashes and hyphens as separators you'll also see here the possibility of starting a new merge request so these changes are not committed yet i'm going to create a new branch the branch name will be feature forward slash and some name and i'm gonna click here on commit this will open an additional window so the branch has been created but now i'm also opening a new merge request and in the merge request there are a few things that i should fill out for example the title is not very descriptive so i'm going to write something like add linter to the project you can also go ahead and provide a description this is useful for the people who are looking at this merge request to know why it is important what this feature is bringing or if it is a bug fix which issues it is fixing and so on there are also some additional labels and options that you can set here i'm not going to go over them because they are relatively self-explanatory i'm going to go ahead and click on create merge request and now a new merge request has been created and additionally we also have a pipeline that is running against this branch so what is happening here the changes that we made which you will be able to see here what did we do we added a new stage we added a linter and we also made some changes to the unit tests we can go ahead and look at these changes and we have here a pipeline that's executing it's going through all these stages we have the pre stage the build stage and the test stage and now we can essentially simulate this pipeline and see if it fails if it fails we have the opportunity of fixing things if it doesn't fail then we have the option of merging these changes the next important step in the life of a merge request is the review process especially the code review where somebody else from the team takes a look at our changes and gives us feedback now just in case you don't know where your merge request is you will find on the left hand side a panel that says merge requests and it will indicate how many merge requests are open at this time we only have one merge request and you will see its title we're trying to add the linter and you'll find here the most important information
so this is a request to merge this feature branch into the main branch if you need to make changes to it you can open this branch in the ide and continue making changes you have the information about the pipeline so we know that the pipeline has passed and someone else looking at this will be able to see which commits have been made and get an overview of the changes there are different views that you can use you can compare the files and someone can also leave comments here for example i could simply click on a line and ask why did you use the .pre stage and add that as a comment now where does this merge request go from here typically when someone has reviewed it they can go ahead and approve it so you will see who has reviewed these changes and you may have internal rules like needing two people to review any changes before they are merged so the changes can be approved and then merged comments can appear so for example here i've added a question to these changes and maybe there's some discussion that needs to happen there feedback can be gathered from the different people involved and this feedback may need to be integrated as i mentioned you can still make changes the merge request will always contain the latest changes so we can make more commits it's also possible that for some reason these changes are no longer needed they are wrong or the approach that was taken doesn't really make sense so there's also the possibility of closing a merge request you can go here and see closed merge requests in that case the merge request is closed and no changes are merged so that's also a possibility but typically what happens after the review and after some feedback has been integrated is that we go to this merge button and what happens here is that we are merging this branch into the main branch there are two options that are enabled by default the source branch the branch that we have created will be deleted which makes sense because we don't need it anymore and in case we have multiple commits we can also squash those commits in this case we have only one commit we can modify the commit message if we want to but that's not needed here and then we can simply click on the merge button this merge button is available only if the pipeline succeeded but our pipeline did succeed and this is why we can see it so now gitlab will merge these changes into the main branch and again a new pipeline will be generated this is the pipeline for the main branch you will see exactly the changes that we made we have the pre stage the build and the test all available here exactly what we ran inside the merge request the pipeline is running again against the main branch and this is needed to ensure that indeed everything is working fine and that there are no integration issues or other issues it's a good practice it may seem like it's somehow a duplicate but it's actually important to ensure that the main pipeline is working properly this is what we care most about ensuring that this main pipeline works as it should that's also the reason why we have reviewed the merge request to ensure that whatever we're changing in the main branch is as good as it gets and that the chances of breaking something are drastically reduced right at the beginning of this section i showed you locally the steps needed to build this project
but i've also taken the build folder opened the website and you have seen something that looks like this how can we replicate this in gitlab essentially we want to make sure that we really have a website that can be started by a server and we want to make sure that for example this text here learn gitlab ci shows up how can we do that inside the pipeline currently we're just checking if we have a file called index.html and that's not really telling us the entire story so in this lecture we're going to take a look at how we can do what is sometimes called an integration test we're really testing that the final product is working i'm going to open the web ide and inside the pipeline file focus on this test website job the current test is not really that helpful we want to do something else you have seen me running the following command serve -s build this takes the build folder starts a server and then we can take a look at what's going on so we're going to remove the old check because we don't need it anymore and start a server here and just because i want to make sure that this runs as fast as possible and that we're not wasting a lot of time i'm going to disable some jobs i'm going to disable the linter job which is not really needed for the moment and i'm also going to disable the unit tests to disable a job you simply add a dot in front of the job name and that job will be disabled let's commit these changes again we'll have to create a new branch i'll call it feature integration test i'm going to start a new merge request as well and call it add integration tests and click on create merge request if we're looking at the pipeline we'll see that the build has succeeded but there seems to be an issue with testing the website so let's take a look at the logs and understand what is going on what is the error we're getting an exit code the last command that we executed is serve and we're getting serve not found so again we're trying to run a command that was working locally for me but the docker image that we are using here does not have this command which docker image are we using we're using alpine and alpine does not have this command so we need to find a way to install it as well for that we're going to go back to the editor we can go to the merge requests click on the merge request and we'll find here open in web ide this will show us a list of the changes that we have it's the review view showing what we have changed but if we want to make changes to the code we go here and select the pipeline file again so let's see what we need to do in order to get this job running now first of all we're gonna use the same node image because serve is actually something that runs on node it's a dependency that runs on node and we're gonna use yarn to add this dependency i'm gonna write here yarn add serve and actually we're going to add it as a global dependency so i'm going to write yarn global add serve this will ensure that serve is now available and we can use yarn because we are now using the node image again
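at this point the test website job looks roughly like this and as you're about to see it has a problem

test website:
  stage: test
  image: node:16-alpine
  script:
    - yarn global add serve
    - serve -s build    # this starts a server that never exits on its own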
this time we are already in a merge request so we can go ahead and click on commit i'm going to commit directly to the existing branch that we have and the pipeline will be executed again be sure to watch what i'm explaining next because we have introduced an issue in this pipeline and it's a big issue i'll explain in a second what that issue is i'm going to open this pipeline go directly to this test job and wait a bit for it to start i want you to notice what is happening with this job we have started this docker container which contains node then we have added this dependency which is serve and then we have executed serve which folder are we serving we are serving the build folder then it says here info accepting connections on localhost port 3000 so what is going on well we have started an http server which is serving these files inside this gitlab job and when does a server end well it never ends that's the purpose of a server it will serve these files indefinitely and of course this approach is not good because we're gonna wait here forever well actually not forever jobs do have a specific timeout you will see here that the timeout is one hour so if we don't stop this job it will run for one hour this is an important concept when you start a server inside a job that job will never end on its own it will run into this timeout so be very careful when you're doing things like that now i'm going to manually stop this job you will see here this cancel button we cannot wait forever so i'm going to cancel it and it will stop now there is a way to start a server and there is a way to check for that text but we still need to make some changes so i'm gonna go back to the pipeline open the pipeline file and see what we have here what do we want to do we have managed to start this server and it will run forever but how do we test something in order to test something we have to execute another command right after it and in order to actually get this information we need an http client we need another tool that will go to this address which as you have seen is on localhost that's the host name on port 3000 and the protocol is http we don't have a browser that we can open here so we need something that is as close as possible to a browser and that tool is curl curl is a tool that will go to this address download the website and then we can do something with it i'm gonna use here a pipe which will send the output of this website to the next tool and as you remember from working with files we have the possibility of using grep grep will search for a specific string so what is the string that we're looking for i think the string is learn gitlab ci i'm gonna check again against the website to make sure that i have this exact text because this is what we are trying to find inside the website so we're starting a server then we have this curl command and in grep we're getting the website and searching for this string now we still haven't solved the problem with the server running forever we don't want it to run forever so we're going to use a small trick i'm adding here an ampersand and what this will do is start this process in the background it will still start the server but in the background and it will not prevent this job from finishing so when the curl command has been executed this job will also stop because that command runs right after this one but curl will not wait for the server to actually start
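at the shell level the idea so far looks roughly like this although as i'm about to explain two pieces are still missing the server needs a moment to start and curl is not yet installed in this image

serve -s build &                                      # start the http server in the background
curl http://localhost:3000 | grep "learn gitlab ci"   # download the page and search for the text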
the server needs a few seconds to start and in order to wait for that since we don't know exactly when it will be ready we're going to use another command which is called sleep sleep takes a parameter in seconds so we can decide how many seconds we want to wait just to be sure i'm gonna use 10 seconds additionally curl is also a command which we don't have in this image and unlike serve it has nothing to do with yarn or npm so this is an exception and for that reason we have to use the package manager from alpine which is apk so apk add curl this is how we are adding curl as a dependency so let's give it another try and see if this works now if we're looking at the logs we'll see that this is still failing maybe this is not our day but let's go through the process again and try to understand what is going on and generally i have to tell you we have to get used to errors the more we understand about them the better we become at solving this kind of problem so we have added serve we have added curl and we started the server we can see here that we're still getting this information accepting connections we waited for 10 seconds curl has downloaded the page and sent it to grep but we are getting an exit code one since we're not seeing any errors from curl it means that curl has downloaded the information so we have to wonder what's going on with grep why isn't this text in the response now we could go and debug this a bit more but essentially when curl downloads a website it's not downloading what you see rendered right now if you click here and go to view page source on any website what curl is actually downloading is this the index.html file and this index.html file will download other things in the background it will download some scripts so the text we were looking for is not really coming from this page that we are getting with curl and curl doesn't have the possibility of rendering javascript and images and so on it's a very basic tool what we do have here for example is react app in the title so how about we change our check and assert that we have this title let's go into the application and figure out where this react app text appears i could also change the title to something with gitlab but instead of searching for learn gitlab ci we're gonna simply use react app because we know it is available here so from the project i'm going to open the web ide if i'm not in the right branch i can simply switch the branch open the file that i want to edit and instead of learn gitlab ci i'm going to use react app we are also gonna re-enable the rest of the jobs the unit tests and the linter which had been disabled and then we're gonna put all these changes together and commit them to the branch we're not going to start a new merge request here we already have a merge request so we disable that option if it's enabled i'm going to commit again and finally the tests are working so let's go and inspect the job logs what do we find here well we have everything that we are interested in and in particular the last part the one that was failing is now able to locate this text this is why we're getting back the entire line you'll see react app is in the response so curl and grep now work together
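with all the pieces in place the working version of the job looks roughly like this the grep here uses -i so the match is case insensitive

test website:
  stage: test
  image: node:16-alpine
  script:
    - yarn global add serve
    - apk add curl
    - serve -s build &
    - sleep 10
    - curl http://localhost:3000 | grep -i "react app"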
with that this job succeeds it has been a bit of work but i'm happy that we have this test it ensures that our pipeline really works a bit better and that we're more confident in what we're building i'm gonna go to the merge request you will see here we have four commits and we have here the changes that we made we reverted all the other jobs that we had disabled and we just adapted this test website job so from that point of view everything is good we're gonna squash the commits here and we can also modify the commit message the commit message is add integration tests that looks good i'm gonna click here on merge and these changes will be merged into the main branch so currently our continuous integration pipeline looks like this it is divided into three separate stages and the way i've structured this is just an example there is no hard rule different programming languages and technologies may require a different order you are smart people and i'm sure you'll be able to apply these ideas to whatever you're doing however i just want to mention two principles or guidelines that you should consider one of the most important aspects is failing fast for example we want to ensure that the most common reasons why a pipeline would fail are detected early for example here linting and unit tests quite often developers forget to check their linting make a mistake or forget to run the unit tests so if we notice that this is happening a lot it really makes sense to put these in the first stage of the pipeline they typically don't need a lot of time so the faster they fail the better because we'll get fast feedback and we'll use fewer resources and of course if we're running jobs in parallel grouping jobs that are similar in size is also a relatively good way of grouping jobs together now you have to be careful when you're grouping jobs in parallel for example if you have a job that finishes in 15 seconds and a job that finishes in five minutes let's say the linter needs 15 seconds and the unit tests need five minutes if the linter fails after just five seconds the unit tests will still run for five minutes there's no stopping them so it does make sense to put in parallel jobs that are similar in size another aspect that you need to consider are the dependencies between the jobs what do i mean by dependencies well for example here we cannot test the website until we have actually built the website but the linter and the unit tests don't depend on the build output so we can run them before the build or after the build we don't have a dependency there but in this case for the website test we are dependent on the job artifacts because we're using them for something so from that perspective we always need to have the build job before we can test the website if jobs have dependencies between them they need to be in different stages now i have to say that there is no replacement for experimentation and trying things on your own if you are unsure whether something works or not just give it a try and see how far you can go and why the pipeline fails as you have noticed so far our pipeline does require a bit of time and actually optimizing these pipelines is a lot of work so what i want to show you is just a very simple way to save a bit of time throughout this course so that we don't wait so much for each of these stages again i'm gonna open the web ide and what i want to do is to essentially restructure these jobs
now as you can notice the build website the linter and the unit tests jobs are all pretty similar they all use the same image they all need to install the yarn dependencies and they just run one command after that so one idea would be to take the linter command yarn lint and put it right here after we have installed the dependencies and in that case we don't need the linter job anymore the same goes for the unit tests we take yarn test and put it after yarn lint and again we don't need that job anymore we also don't need the .pre stage anymore so we can reduce this to just two jobs now the test website job does use the same image but it doesn't have the same dependencies it's just installing serve and it's installing curl so i don't want to combine it into a single job with the others we could theoretically put everything in a single job but i'm doing this now purely for performance reasons and in order to save a bit of time we're just gonna stick to these two different stages the build stage and the test stage and even if we're doing a bit of testing in the build job that's acceptable in order to save a bit of time as i said it's totally up to you how you structure your pipeline and this is why we are removing this now i just wanted to show you how these jobs look but for the rest of the course we don't want to spend so much time waiting for these stages to complete so this setup will be a bit easier i'm going to create a merge request and merge these changes into the main branch
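after this restructuring the pipeline is down to two jobs and looks roughly like this

stages:
  - build
  - test

build website:
  stage: build
  image: node:16-alpine
  script:
    - yarn install
    - yarn lint
    - yarn test
    - yarn build
  artifacts:
    paths:
      - build

test website:
  stage: test
  image: node:16-alpine
  script:
    - yarn global add serve
    - apk add curl
    - serve -s build &
    - sleep 10
    - curl http://localhost:3000 | grep -i "react app"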
if you're binge watching this course make sure to take a break after a few lessons there is a lot to take in but you need to take care of your body as well so get up from time to time drink some water eat or go outside and relax your mind and trust me taking a break is so much more productive than staying all day in front of your computer if your body does not feel well it impacts your productivity and energy levels so i'm going to take a break as well and i'll see you in a bit in this unit we'll learn about deployments we'll take our website project and deploy it to the aws cloud along the way we'll learn about other devops practices such as continuous delivery and continuous deployment by the end of this section we'll have a complete ci cd pipeline that will take our website build it test it and let it float in the cloud just a quick reminder in case you get stuck or something does not work check the video description for the course notes which include troubleshooting tips this sounds exciting let's begin amazon web services or simply aws is a cloud platform provider offering over 200 products and services available in data centers all over the world aws offers a pay-as-you-go model for renting cloud infrastructure mostly for computation or data storage so instead of buying physical hardware and managing it in a data center you can use a cloud provider like aws such services include virtual servers managed databases file storage content delivery and many others aws started in the early 2000s and its adoption has continued to increase over time maybe you have heard the story of when the new york times wanted to make over 1 million old articles available to the public a software architect at the new york times converted the original article scans into pdfs in less than 24 hours at a fraction of the cost a more traditional approach would have required and this was back in 2007 today many organizations can't even imagine their it infrastructure without using some cloud components while this course focuses on aws the principles presented here apply to any other provider if you don't already have an aws account don't worry it's quite easy to create one just enter your personal details and click on continue since we're using aws for learning and experimenting i recommend choosing a personal account aws is a paid service and even though in the beginning there is a free tier with some generous limits that is ideal for learning and experimenting you are still required to provide a credit card or debit card for the eventuality that you go over the free limits i'll go with the free basic support plan which is fine for individuals now that we have an aws account that is verified and has a payment method what we need to do next is sign in to the console because from the aws console we can access all the services that aws offers if you're seeing the aws management console it means that you have set up everything correctly and you can continue with the rest of the course and start using aws right away let's do a bit of orientation this is the main page from where you navigate to all aws services aws services are distributed across multiple data centers and you have the possibility of selecting the region you would like to use right here at the top of the menu i will go with us-east-1 north virginia you can use whichever region you like just remember the one you have selected as this will be relevant later on typically the data centers in the us have a lower cost than others spread around the globe and right here you have a list of all services available you can also search for services in this search bar finally i highly recommend that you go to your account profile and enable multi-factor authentication this will significantly raise the security of your account that's about it now you can start using aws the first service that we want to interact with is aws s3 which stands for simple storage service you can find a service by simply searching for it you will see s3 here i'm gonna click on it and there you go this is aws s3 s3 is like dropbox but much better suited for devops actually for a long while dropbox was using aws s3 behind the scenes for storing files but let's get back to the course since our website is static and requires no computing power or database we will use aws s3 to store the public files and serve them to the world from there over http on aws s3 files which aws calls objects are stored in buckets which are like a kind of super container or folder now you may notice that your aws interface looks a bit different than mine aws is constantly improving the ui and things may change over time however the principles that i'm showing here will stay the same so let's go ahead and create our first bucket first of all we have to give our bucket a name and the name of the bucket needs to be globally unique so i'm going to give this bucket a unique name so that i don't run into conflicts with anyone else who may have created a bucket with the same name i'm gonna just use my name a dash and after that a date additionally you'll have here the aws region and in my case i'm gonna keep it exactly as it is there are also a bunch of settings that we'll not look into right now so right here at the end i'm going to click on create bucket all right the bucket has been successfully created and we can go inside the bucket to see if there's anything in there but at this point there are absolutely no files
ahead and manually upload files or download files or things like that so through the web interface it's possible to add files and folders but this is not what we are trying to accomplish with devops we want to automate this process so for that reason i'm going to leave it as it is and in the upcoming lectures we're going to find a way to interact with aws s3 from gitlab so how can we interact with the aws cloud and particularly with aws s3 in order to upload our website if you remember in the beginning we talked about how we typically interact with computers and that is through a cli command line interface so for the aws cloud we need a command line interface to be able to upload files for example and fortunately aws has already thought about that and there is an official aws command line interface that we can use there's also a very in-depth documentation here about the different services that are available and throughout the course we're gonna also navigate through this documentation because i want you to understand the process of interacting with such services and the importance of actually using the documentation and not just replicating what i'm doing on the screen now to be able to use the aws cli inside our gitlab pipeline we need to find the docker image and the best place to search for docker images is docker hub so here inside docker hub i'm going to go ahead and write aws cli and i'm going to find it here we'll see it's under verified content if possible we always try to use verified official docker images i'm gonna take this one so all i have to do is simply copy this and then we're gonna go inside our pipeline open the web ide again and let's go ahead and create a completely new job here so i'm going to call this job deploy to s3 and we're gonna add an additional stage here because we need to do this deployment after the ci pipeline is done so we have tested everything and after that if everything passes then we're gonna move to the next stage which is the deploy stage and right here stage deploy and the image that we want to use is the one i've just copied which is amazon aws cli now by default this image will not really work the way we have used for example the node image or the alpine image this image has a thing which is called an entry point essentially when we're starting the image there's already a program that's running with that image and this is something that conflicts a bit with the way we're using images in gitlab so we need to override that entry point so for that reason i'm going to move this to a new line i'm going to write here another property which is called name so this is under image and this is not a list and below it i'm gonna write entry point and here i'm gonna overwrite the entry point with these square brackets and these quotes here so then comes the script part and in the script part we are essentially writing well let's test if the tool is working so typically almost every tool will have the possibility of printing out the version so the name of the tool is aws and then the version will be dash dash version now what we're interested in is using aws cli in version two this is the current version another version is version one but we're definitely interested in using version two so for example here from docker hub you can go to the tags and you can also specify a tag if we don't specify a tag we'll always get the latest version but generally it's recommended that you specify a tag and go with that tag so for example if i want to specify
this tag i'm going to copy the name of the tag i'm going to add here a colon and then the value that i want to use it's a best practice to always specify a tag but of course by the time you watch this there will be a newer version typically anything that starts with two should be good enough so let's commit these changes in a branch and see how the pipeline looks now right now i'm getting an error and if i'm looking inside the pipeline to see what's going on there are some errors because the stage i have chosen in this case the deploy stage doesn't exist so the pipeline hasn't been executed so we still need to make some changes to it so here where we have the stages i'm gonna add here deploy so now we have the stages build test and deploy again i'm going to commit and see if the pipeline runs so now the entire pipeline has been successful let's take a look at the deploy job see exactly what has happened there and we take a look at the logs here we have used this aws cli image and of course we have printed here the version so we'll see exactly which version we're using i've used the tag so with that tag i'm gonna always get this version but sometimes if you don't wanna specify a tag it's still a good practice to log the version in the logs just in case you know from one day to the next one or like one month later or something like that your pipeline doesn't work anymore at least you can compare previous logs with existing ones and see if there are any differences in the version that you have used for the various tools i'll tell you right from the start that getting this upload to work will require a bit of learning and a few steps so we're not gonna be able to do this right in this lecture and i'll get some errors just as a heads up but we'll be making progress toward that goal so how do we upload the file so let's continue editing this pipeline now as i said because we're going to need a few tries i'm going to go ahead and disable the build website and the test website jobs and this will ensure that we are only running the deploy to s3 job for the moment until we're ready and we have tested that the upload works i always like to make things as simple as possible and whenever i'm working with a new tool i want to make sure that i don't make any mistakes or that i can easily understand what's going on so for that reason i'm gonna leave aside this complexity with building and testing the website we're gonna only focus on deploy to s3 so we have here the aws version and then what we're going to use here is aws this is the name of the cli tool then i'm going to specify the service name which is s3 and then after the service name we are gonna use copy right so we are copying something to s3 now what are we copying well for example let's create a new file from scratch just to make sure that everything is working properly so how do we create a file i'm gonna use echo i'm gonna write here hello s3 i'm gonna put it inside a file let's call this test.txt right so we are putting hello s3 inside this file so now we have this file and we know exactly what we are uploading and the question is where are we uploading this we're uploading this to the bucket that we have created now for that reason we need to have the bucket name and if you still have your bucket open you'll be able to see here the destination now this is my destination you will have a different bucket name of course i'm going to go ahead and simply paste this so it's s3 colon forward slash forward slash the name of the bucket then we have to specify the name of the file so in this case i'm going to keep the same name it's going to be test.txt so we are taking this file that we have created inside gitlab we are uploading it to this bucket and the name of the file will be the same in this case
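putting the pieces from this lesson together the new job could look roughly like this the image tag and the bucket name are just examples any 2.x tag of the amazon/aws-cli image should behave the same and you would of course use your own bucket name

```yaml
stages:
  - build
  - test
  - deploy          # the new stage, without it the deploy job is rejected

deploy to s3:
  stage: deploy
  image:
    name: amazon/aws-cli:2.4.11   # example 2.x tag, pin whichever current version you copied from docker hub
    entrypoint: [""]              # override the image entrypoint so gitlab can run our script
  script:
    - aws --version
    - echo "Hello S3" > test.txt                              # a throwaway file just to test the upload
    - aws s3 cp test.txt s3://my-name-2022-bucket/test.txt    # hypothetical bucket name, use your own
```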
let's give it a try and see how it works because we have disabled the build and the test stages essentially we now only have the deploy to s3 so this makes the entire pipeline execution much faster but let's take a look why this failed so what we did we're still printing the aws version and then the last command that we have executed is this one and i cannot stress how important it is to go through the logs and try to understand what has happened so it says here upload failed trying to upload this file to this location and it says the error essentially is unable to locate credentials the thing is how is aws cli supposed to know who we are i mean if this would work as it is right now we should be able to upload files to any bucket that belongs to other people or to delete objects from buckets that belong to other people so we need to tell aws cli who we are and this is what we're going to do in the upcoming lectures and in order to have the entire context in terms of this we have to go through a few stages of preparation right now there's something about this pipeline that i don't like and that is because we have something inside here that may change later on and i would really like to make this configurable and not to have this information inside here at the beginning of the course we looked at defining variables and i've shown you how you can define some variables within the pipeline there's also another place where you can define variables and for that i'm going to copy this value and i'm going to go inside the settings so from your project you can go to settings here on the right hand side and what you're gonna do here is select ci cd and right here in the middle you will see variables i'm going to expand this so that we are able to add variables now typically here we tend to store passwords or secret keys but i'm going to use this bucket name as an example for some important features you need to know about so let's go ahead here and click on add variable and i'm going to name my variable aws underscore s3 underscore bucket and the value will be exactly what i've copied from the pipeline now there are some additional settings here i want you to pay attention to there are two flags one of them is typically enabled by default and that is protect variable this flag alone is the single cause for a lot of confusion and a lot of wasted time for people who are just getting started with gitlab i just wanted to point this out if this flag is enabled this variable that we have defined here will only be available to protected branches so for example the main branch that is a protected branch but will that information be available in our current branch i created here a feature branch for working with aws cli and if i leave this flag on and try to use this variable there it will not work typically the idea here is let's say you have two environments you have a test environment and you have a production environment you may have different credentials for each system for security reasons now you typically want to keep the protected variables for the main or master branch which deploys to production in this way you ensure that nobody has the credentials to accidentally deploy to production from a different branch so this is a security measure now we are using
branches and we are using these services directly so at least at this stage we're going to go ahead and disable this flag we don't want to protect this variable because then it would not be available in our pipeline execution as we are running this on a branch the second flag is disabled by default and it controls whether the value will be masked masking a variable is particularly useful for passwords so for example if you accidentally try to print one of these variables in your code it will not appear there it will just appear masked out with simply stars it's okay to have for example usernames or other things like this bucket name we don't need to mask this because it's not really a secret but if we had here a password it would make sense to mask it so for this variable i'm gonna disable the protect variable flag and i'm gonna disable the mask variable flag many people think that if you don't have the protect variable flag the variable is somehow public or unprotected that's not the case it's simply available for all the branches inside a project and at this stage that's totally fine i'm gonna change this a bit later on but at this point that is fine i'm gonna go ahead here and add the variable you will be able to see it here being added and of course if you need to make changes to it you can go ahead and make changes from here going back to the pipeline instead of having this value i'm going to start here with the dollar sign aws s3 bucket so the name has to be exactly as you have defined it and don't forget to put the dollar sign in front because this is what makes this a variable we can run this pipeline again and see how it looks and see if we notice any changes now if we're looking at the logs we should be able to notice something interesting so what is interesting about this is the following we have the same command but now inside the command you notice this variable so when we're writing the command the command that's being logged here doesn't change it will still show you the variable so it may seem that it's not resolved but actually if you're looking here at the error that we're getting you will see here that the variable has been resolved an indication that the variable has not been resolved is for example not seeing this text at all so i invite you to play around with the protected flag and to run the pipeline again to see what's happening and you should be able to see here that after s3 you will get three forward slashes you will not be able to see your bucket name here so let's go back to what we wanted to achieve to actually upload this file and we're still getting this error unable to locate credentials so what should we do should we put our aws email and password somewhere in these variables so that aws can locate them well you're getting pretty warm with that yes essentially we have to provide some credentials but those credentials won't be our regular email and password because that would be highly insecure whenever we're using a service we try to give limited access to it so in order to give this limited access to services in this case we only need s3 there is also a service that manages that so if we go back here to aws and search for services the service that we are interested in is iam and this is essentially an identity management service that aws offers let's go inside it and see what we can do so first of all if you didn't activate multi-factor authentication you're going to get this warning and
i definitely recommend that you do that this is just a testing account for me which will be very short-lived so for that reason i didn't do that at this point now here from this service we can create new users essentially we want to create a user that will be able to connect to s3 so from the left hand side here let's go to users and i'm gonna add a new user so we have to give this user a username it can be anything typically i'm using something so that i know why i created it so i'm going to call this gitlab so i know exactly that this user is intended for gitlab and we have to select an access type now what we are interested in is also called programmatic access so we're going to click this and you will see here that this enables an access key id and secret for aws api cli and so on so we're interested in having this for aws cli so this is why we are creating this user and we are enabling this programmatic access we don't need a password we don't need this user to have access to the management console like we do so for that reason that's sufficient let's go to the next part which is the permissions so essentially the permissions tell what the user is allowed to do and permissions in aws i'm going to say it's not an easy topic so i'm going to go the most straightforward way i'm going to attach an existing policy so i'm going to go here to attach existing policies in this search bar i'm going to search for s3 and you're going to get some predefined policies so a policy is essentially like a set of rules that we can apply to a user and the set of rules that we're going to use here is amazon s3 full access so essentially we're gonna give this user access to everything that is aws s3 so it should be able to create new buckets delete files and so on so a bit more than what we actually need for this use case but just to simplify things i'm gonna give this full access to the user but of course it's a topic on its own let's go to the next page we see the tags we don't need to add any tags here and then we'll be able to review what we're trying to do and we can go ahead and create this user so now we have successfully created a new user and aws has created something for us and that is an access key id and a secret access key to put it in plain english this is like a username and a password you will see here that the password is not even displayed so what are we going to do with this information so first of all let's go ahead and copy the access key id go inside gitlab again go to settings ci cd and expand here the variables and let's go ahead and add a new variable i'm going to start here writing aws and you will see there are already some predefined things that pop up so one of them is aws access key id this has to be written exactly as you see it here if there's something that's different about it aws cli will not be able to pick up this variable it will look exactly for this variable name and will automatically pick it up without us doing anything else so it has to be exactly as it is i'm going to paste here the value of course this is as per my account so you'll have a different value and i'm gonna disable the protect variable flag because we want to be able to use this in a branch as well i'm gonna go ahead add this variable and going back to aws i'm gonna also show here the secret access key of course i'm gonna delete this right after recording but i'm showing you just so you know exactly how it looks it's also important that you don't add any
spaces or anything like that so i'm going to copy this secret access key and go here add a variable and i'm going to start typing aws secret access key and i'll paste the value here i'm gonna disable the protect variable flag and click on add variable finally there's still another thing that we need to configure and that is our region so again i'm gonna add here a variable start typing aws and we have here the default region so when we're setting this variable all services that we use will be in this region and we don't need to specify the region all the time aws knows exactly where this bucket or any other service is located and we cannot simply write here anything we have to write it exactly as aws uses it internally so what do i mean by that well let's go back to the aws console and you will see here the s3 service under recently visited so this will make our life easier in terms of grabbing this information and for example my bucket is in us east north virginia what we're actually interested in is this code so i'm going to copy this information and paste it here so this will be the default region and again i'm not going to protect this variable and i'm going to add it here what i forgot to do in terms of the secret access key is i haven't masked it so it would be a good idea to go back to it click on this edit and you will see here this flag because this is essentially a password so i click here this flag to mask it and update it and then you also have here an overview like which one of them is protected which one of them is masked and of course if you need to inspect some of these variables you can go back and click on them and you will be able to see the value but at least on this overview here they are hidden there's also this possibility of copying the value without revealing it this can be useful as well or you can click here on reveal values and they will be displayed all right so it seems that we have everything in place now at least we have these credentials and because we named them the way we named them aws cli will automatically pick them up so there's actually nothing that we need to change in our pipeline so for that reason we can simply go ahead and rerun the same pipeline once again i'm going to go here to ci cd pipelines go here to the job that failed and here i can simply click on retry this will start the exact same job once again all right so this already looks so much better the job has succeeded we don't see here any errors aws cli is telling us which files have been uploaded so we could jump into s3 and take a look inside our bucket to see if we have any files here you will be able to see here our test.txt file right inside here we can see when it was modified and so on we can download it if we need to but essentially this has been the first major step in terms of interacting with the aws cloud all right so we have made some progress and we have managed to finally upload this file but if you remember we have an entire website with a lot of files so you know going file by file and uploading that is not really the way to go so i want to make sure that whatever we have inside the build folder is going to be synced to our bucket now if you remember in the beginning i mentioned the documentation the reference for the aws cli essentially for any service there is documentation here and for each additional command so we have used aws s3 as the name of the service so in this list of services here you will find s3 somewhere so here we go this is s3 so i'm going to click on it i'm going to
be honest with you in the beginning this documentation will look very scary but if you take a bit of time if you have a bit of patience and you go through the documentation this is the way to really master the cli commands that you will be using and this doesn't only apply to the aws cli it applies to any tool out there so reading the documentation looking at all the parameters everything that you can do with this is really super helpful so for s3 you will see here a lot of documentation but also at the end you will see here a list of available commands so we have used cp for copy so we've copied this file and it's also definitely possible to copy an entire folder but i wanted to show you also a different command which is sync now typically syncing something means that whatever we have on one side we have on the other side as well for example to ensure this with copy we first have to remove the folder just to ensure that we really have everything so for example if we have added a file or removed a file we don't want to have those files on s3 anymore if we don't have them on our website anymore so for that reason using something like sync which ensures that all the files are in sync does make a lot of sense so we can go here on sync and we can tell right from the description that it syncs directories and essentially it ensures that whatever we have there on one side is also available on the other side so how do we use this let me demystify this and take a look at our existing pipeline that we have here so first of all we're going to give up on this test file that we have uploaded here this was just for testing purposes and to make sure that we have everything set up correctly so we're gonna use here aws s3 so the command will be sync so what are we syncing we're syncing the build folder so instead of that file we're going to have an entire folder and the question is where are we syncing this we're syncing this to the bucket so we're going to remove this part here because we are going to put it directly inside the root and additionally we're going to also add a flag so if you're looking here through all the options that are available one of the options should be dash dash delete it will essentially ensure that if we delete some files during our build process which existed previously in previous builds they are also deleted during the sync so if i had a file in the build folder i synced it in the last build then i later removed that file i want that one removed from s3 as well so i'm gonna add here delete so this essentially should be enough to upload our build folder
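with the bucket name moved into the aws_s3_bucket variable the deploy job at this point boils down to something like this again the image tag is only an example

```yaml
deploy to s3:
  stage: deploy
  image:
    name: amazon/aws-cli:2.4.11   # example tag
    entrypoint: [""]
  script:
    - aws --version
    # mirror the local build folder into the bucket and remove anything
    # that is no longer part of the build
    - aws s3 sync build s3://$AWS_S3_BUCKET --delete
```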
now in order to get this to run of course we have to re-enable our previous jobs so now build and test are enabled i'm gonna commit these changes and see if this command worked and the pipeline is now successful and we're most interested in this last job that we have executed and now if we're looking at the logs we'll see exactly which files we are uploading so we'll see here uploading this uploading that so these are all the files that we have inside the build folder what's also important to notice is this delete here so this file is then deleted because it no longer exists in the build folder it actually never existed in the build folder but sync detects that this file doesn't exist there and says okay if it doesn't exist in the build folder it doesn't make any sense to stay on s3 and goes ahead and removes it so let's take a look at the s3 service i'm going to refresh this page and take a look to see how it looks so now we are able to see all the files that we are actually interested in they all have been uploaded here and we made a very important step towards hosting our website in the aws cloud now currently the files that we have uploaded here are not publicly available whenever we create a bucket by default it will be private to us so nobody else external has access to it actually that's kind of like the normal use case you don't want the files that you're putting here to be available for everyone but of course there are use cases when you want to make these files accessible to anyone else and actually quite a lot of companies are using s3 for storing files that they offer for download because it's so much cheaper than hosting them on their own website now in order to make this bucket publicly accessible we need to change a few things and we're going to start here with properties and actually what we are definitely interested in is enabling static web hosting so again i'm here in my bucket inside properties right at the end there's static website hosting i'm gonna go ahead here click on edit and we're going to enable static website hosting and the hosting type will be host a static website and there are also some settings here that we are gonna input so websites typically have an index page which is a start page and also they have an error page for this application that we have the index page and the error page are the same and this is the index.html file if you look inside the bucket you should see the index.html file so this is exactly what we're going to have here for the index and the error so i'm going to save these changes and we're still under properties and if we go back here to the static hosting part you will see here that now we have an address so now our website is hosted on aws we could also get a domain and point it to this address but for what we're trying to do right now this is good enough so we can go ahead and click on it and in the beginning we're gonna get this error and there are still a few things we need to configure don't worry about it i just wanted to point out the address and where you can view it so let's see which other settings we need to make i'm gonna go here to permissions and you will see here the first thing that appears is access bucket and objects not public whatever we have here in the bucket is not public so this is why even if we have enabled static website hosting we still cannot access this information so let's take a look at how we can enable public access i'm going to click here on edit and i'm going to disable this block all public access and save the changes and again because this is like a major thing aws really wants to make sure that we know what we're doing so i'm going to enter here confirm and going back to the website we can try a refresh and it will still display the same error page here so there's something that's still not working properly what we need to do in addition to what we did here in regards to the public access is a bucket policy now essentially we need a policy for the objects that are inside the bucket i won't get too much into the details but essentially we need to write this policy so we can go ahead here and click on edit there's also this policy generator now what you see here is json and this is the format in which this policy will be written here there are a few things that we need to change about this policy so essentially we can go ahead here and add an action so let's search for example for s3 i'm going to select it
here and the permission that we are looking for is essentially get object i'm going to click on this get object but additionally there are also a few other things that we need to change here just to make sure that everything is working properly and these changes will be made in text but you also find the template in the course notes just in case something doesn't go well you will have the possibility of using that so essentially i'm gonna give here a name and i'm gonna call this public read this is just a name to identify this policy and the principal here needs to be put between double quotes and i'm going to put here this star and then the action will be s3 get object the effect is allow and then the resource we also need to specify the resource and the resource is our bucket i'm going to copy this name it's essentially the address of the bucket and i'm gonna put this here also in quotes and i'm gonna add here very important a forward slash and this star which essentially means that for everything that is in this bucket get object is allowed so it can be retrieved so this is essentially our policy for public reading
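for reference the policy i ended up with looks roughly like this with my-bucket-name standing in for your own bucket the exact template is also linked in the course notes

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
```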
all right i agree it's not so easy for beginners but let's hope we haven't made any errors here and that it will work as expected i'm gonna click here on save changes to apply this policy you will see here warnings all over the place you will see under the bucket in red publicly accessible and access public so aws is really trying to tell you hey watch out if this is not what you intended or this is not what you want this bucket is publicly accessible so really really make sure that you know what you're doing but for us it's fine this is what we wanted to have and now by refreshing this page we're getting the website that we have created and now congratulations really this is absolutely amazing we created a pipeline that now takes every commit that we make and deploys it in the aws cloud it takes it through all the stages and we only have to make the change and the entire process is automated so if you managed to follow along up to this point wow i'm sure that you have learned so much and trust me we're just getting started there's still so much more to learn and i'm super excited about the upcoming steps now let's go back to the pipeline that we have designed now i'm still here in an open merge request and if i'm looking at the pipeline i have here all the three stages so the build the test and the deploy now if you think about it this really doesn't make a lot of sense and what doesn't make sense is to deploy to s3 from a merge request or from a branch now if we think of s3 as being our let's say production server or production environment then it means that anyone who opens a merge request and tries out some small changes if the build and the test pass this will automatically get deployed to production and actually this is not what we want we want inside a merge request or as long as we are in a branch to just run the build and test essentially to simulate the execution of the main branch and then only when we merge this we want to run the deploy so we want to deploy to production so in order to achieve this we still need to make a few changes to our pipeline now this is how the configuration looks at this point and the changes that we need to make are here in the deploy to s3 job now how do we exclude this job from this pipeline which is running on the branch and ensure that it only runs on the main branch gitlab has this feature which is called rules now rules allow you to create really complex conditions and i won't get into those but we're gonna use a relatively simple condition which will check if we are on the main branch or not now in order to set up rules we're going to go to the deploy to s3 job and what i'm going to write here is rules this is a completely new keyword and here we can define a list of rules now we're going to add only one rule and this will be a list so you notice that i'm starting a list and then we have here an if and a colon and now after the if we can specify a condition so in a condition we're typically checking if something equals or does not equal something else so in this case we want to check if the branch we're currently on so we have here something we don't know yet where we're at equals so we're gonna make two equal signs the main branch right so if we are on the main branch only then run this right now we don't want to put here something hard-coded so we cannot know in advance which is this branch name where we are currently evaluating this condition so in my case i'm on feature deploy to s3 or something like that so it doesn't make sense to add that there to check this it needs to be dynamic so in order to have something dynamic we need to use some variables now luckily gitlab comes with some predefined variables and one of these variables is ci commit ref name so i'm going to search here for ref name and you will see here ci commit ref name this will give us dynamically the branch or tag name for which the project is built so we can use this as a variable and i'm going to write here dollar sign this one so this will be evaluated to our current branch now with any of these variables if you're not sure what they do or how they look simply use here echo and just keep something like this for debugging purposes in your pipeline until you get familiar with the values that you're having i'm going to remove it because i don't need it but for you i definitely recommend using echo to inspect the different values that you're using if you were on a different branch this would be something else so if this equals main then we're gonna essentially run this job otherwise it will be excluded from the pipeline still having something hard coded here is also not something that we like doing yes we could keep here main but there are also variables that can handle this so another variable that we can use is the default branch it should be ci default branch and this variable will dynamically give us the name of the default branch so if later on for whatever reason we decide to switch from master to main or from main to something else then we don't need to worry about these pipelines because the default branch will automatically get updated in this variable and then we can use it directly here so again i've added here another variable which is the ci default branch so now everything is dynamic so this rule makes sure that when the current branch equals the default branch this code gets executed in our case right now for main we'll have this job in other pipelines this job will get excluded and not be part of the pipeline
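so the deploy job with the rule attached looks something like this everything else stays as before and the image tag is again just an example

```yaml
deploy to s3:
  stage: deploy
  rules:
    # only run this job when the pipeline runs on the default branch (main)
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  image:
    name: amazon/aws-cli:2.4.11   # example tag
    entrypoint: [""]
  script:
    - aws s3 sync build s3://$AWS_S3_BUCKET --delete
```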
so i'm going to commit this and i'm going to take a look at the pipelines after this so what i invite you to do is to go to ci cd pipelines and now our pipeline only contains two stages so we have the build and the test stage the deploy job was completely removed i can also inspect the merge request and take a look at what is going on here and i see that this pipeline has been updated it contains only build and test so at this point i would say okay i'm pretty confident that this functionality is working properly i'm gonna click here on merge and also let the main pipeline run so again if we're looking here at ci cd pipelines we will now see that the main pipeline has started so this is on the main branch and now this pipeline contains the three jobs required so we have build test and deploy so now the main pipeline is also working we have deployed again to s3 but how do we know if our website is still working again we could go to the address again hit the refresh button check again if it's working but again that's not really the point of automation we want to make sure that the entire process is automated so if this is a worry that we have like is the website still working that we have deployed it's probably time to add another stage and to test after this deployment if everything works correctly on the website or at least as much as possible so let's open up the ide and take a look here at adding another stage so we already have here build test deploy so probably the next stage should be let's call it post deployment so this will be after the deployment and then it's probably time to also add another job i'm going to call this production tests and i want to make sure that the stage is the right one that's post deploy and what i'm proposing is to simply run a curl command so that's pretty similar to what we did here so i'm simply going to go ahead and copy this of course this has to be under the script block and there are a few things that we need first of all we need a docker image that has curl and one option would be to search docker hub again for such an image but essentially any docker image that has curl would be sufficient so i can go ahead here on docker hub and search for curl and what i've used in the past is curlimages curl but one of these verified content images is probably just as good so the address to this image is this one so this is what we need but without the docker pull so we have here image and paste and remove this docker pull that's the name of the image and because curl is such a generic tool we don't need to specify a specific version what we're doing is pretty basic and i'm pretty sure it's not going to change very soon the other thing is the address right so the address where we have deployed this and this is available in s3 so going back in s3 to our bucket looking here at properties and right at the end this is the address right so i can go ahead and copy my address go back to the editor i could paste it here right but again we had a discussion about not having things that could change later on inside our pipeline or have it all over the place so again let's go ahead and define a variable now this time i'm gonna define a variable within the pipeline itself that's also totally fine and also a way of doing things i'm gonna define here a variables block and let's call this variable app base url of course you're free to name it as you wish then a colon and then i'm gonna paste here the address so this is now the full address all i need is this this is the variable name and here instead of writing something like this i simply go ahead and use curl app base url i'm going to search for react app because this is what we actually have in the body of the index html that we are grepping this should be enough at least to test that what we have on s3 is still reachable and that at least this text is still available there
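the new stage and job could look roughly like this the website endpoint is of course specific to my bucket and region so yours will be different and the react app text is simply what happens to be in the index html of this particular project

```yaml
stages:
  - build
  - test
  - deploy
  - post deploy

variables:
  # hypothetical value, copy the website endpoint from your own bucket's properties
  APP_BASE_URL: http://my-bucket-name.s3-website-us-east-1.amazonaws.com

production tests:
  stage: post deploy
  image: curlimages/curl
  script:
    # grep exits with a non-zero status if the text is missing, which fails the job
    - curl $APP_BASE_URL | grep "React App"
```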
additionally we need to make another configuration because otherwise as it is right now this will also run inside the branch so i'm going to copy this rule that we have here on deploy to s3 the same rule is valid here we want to run these jobs only in the main pipeline so i'm going to go ahead commit these changes and merge them into the main branch and i'm pretty confident that this pipeline will work without any issues so there's no additional review that i need at this point i'm going to simply merge when the pipeline succeeds and i'm going to grab a coffee until this is merged so after a few minutes here in the merge request we'll see a few things so first of all we have here the pipeline for our branch with two stages we have the build and the test that's it after we have merged this then the pipeline for the main branch has started and this pipeline then has four stages build test deploy and post deploy so essentially the stage that we have added right now and this contains the production tests let's take a look at them to see what has happened here so we have executed this command curl was able to download the website and passed this information to grep and grep searched for react app in the text and somewhere here was react app so for that reason everything worked out successfully so we know that the website is still there it's still working so that's perfect so let's take a minute and take a look at what we did so far so this is our main pipeline right this is the pipeline that now deploys to aws s3 and we have the build and the test this is what we essentially call continuous integration right this happens also in the merge request but also here we are rechecking that our build is still working and that those tests are executed here the second part this is the cd part in ci cd now cd can mean continuous deployment or continuous delivery i'm going to explain in a second what that means right now we're kind of doing continuous deployment in the sense that every commit that goes into the main branch will actually be deployed to production we have automated the deployment to s3 and we said that you know our s3 bucket is hosting our website and is essentially our production environment that the whole world can see now this is a very simplistic view on ci cd now quite often pipelines also have a staging environment or a pre-production environment essentially if we make changes to our deployment to s3 and something doesn't work anymore we're not really testing this beforehand and the main branch may still produce some errors and again we come back to the same problems we had when we did ci when nobody else could work on the project anymore so for that reason it kind of makes sense to also add another environment before production at least if there are some issues with that environment and the main pipeline breaks at least we haven't affected the production environment so it's still good right now we're just deploying anything and we're testing afterwards what is a staging environment so a staging environment is essentially a non-production usually non-public environment that is actually very close to the actual production environment right so we want to keep things as similar to the production environment as possible quite often we use automation to create these environments and to ensure that they're really identical we haven't done that we have used manual work to actually create the s3 bucket and we did some automation afterwards but ideally we would create this entire infrastructure automatically so the idea is
to add a staging environment as a pre-production environment essentially we want to try out our deployment on pre-production before we go to production so in case it fails the production environment is unaffected the main two concepts i want you to be aware of are continuous deployment and continuous delivery now with continuous deployment as i said every commit that goes through the pipeline would also land on the production server in the production environment with continuous delivery we're still using automation and every successful merge request again triggers the main pipeline which leads to an automatic deployment to the staging environment however we don't go directly to production there is like a button that you click in order to promote a build from the pre-production environment to the production environment so these are like the differences between continuous delivery and continuous deployment in upcoming lectures we're gonna get a bit more into them and you're gonna better understand what they really mean now it's time for you to practice a bit more and at this point i have an assignment for you and the assignment is essentially you building a staging environment and i want you to follow essentially the same process as we did when we created the production environment i also want you to make sure that you write some tests that ensure the environment is working as expected please pause the video and take this as an opportunity to practice a bit more both with aws and with building the pipeline i hope the assignment went well and that everything is working properly in this video i wanted to show you how i would do something similar so first of all i'm going to start with creating the bucket and i'm just going to copy this name that i already have click here on create bucket and essentially i'm going to call this staging essentially the same name but i'm just adding staging to it and in the beginning i'm going to leave all settings as they are and in the upcoming steps i will essentially ensure that this bucket is public so these are the same steps as before and i'm not going to go over them again so after a few clicks i now also have this staging bucket also as public and i've enabled website hosting which means we can go inside the bucket go here to the properties and then right here at the bottom we will see this address which we will need later on first of all let's begin with the name itself so i'm going to copy it and i'll have to go inside gitlab and save it in a variable so here in gitlab i'm going to go to settings ci cd go here to variables what you'll notice here is that we already have an s3 bucket of course that's kind of an inconvenience at this point so we'll need to figure out exactly how we can manage this so i'm going to go ahead and also call it aws s3 bucket but i'm going to add staging to it that will make it different essentially now we can protect this variable because we don't want to deploy from a branch anymore so it definitely makes sense to have this protected masking it doesn't make any sense at this point all right so at least now i have this i'm gonna copy the name so that i don't forget it and let's go inside the project open the web ide and start making some changes to the pipeline now first of all because this is a pre-production environment we need to define another stage before we deploy to production let's call this deploy staging and of course what we need to do here is to define a new job now most of this
job is pretty similar to what we already have here in terms of deploy to s3 and also in terms of the production tests that we have here so it kind of makes sense to copy everything that we have and paste it here again and let's call this deploy to staging and we can call this deploy to production just to have more clarity in terms of where we're deploying and what we're doing also we need to have the right stage so the stage name is deploy staging apart from this we are using the same image we still want to run this on the main branch and the only thing that we need to adapt here is the bucket so it's going to be aws s3 bucket staging and again the tests that we have here can be called staging tests we'll see here that the editor is complaining about the duplicate key so that's a good thing because we know exactly what we need to fix so then we have the staging tests and also here we need another base url so that's also something that we need to consider i'm gonna add staging to this one as well and from aws i'm gonna copy the url and paste it here you can notice that we have two different urls so let's double check everything we have added a new stage deploy staging which is before deploy or we can even call it deploy production right so we have even more clarity we just need to make sure that we are adapting this everywhere so we have here deploy staging and then we have deploy production but then we have post deploy now we also want to have this testing after the staging so we need something like test staging for example and we can call this test production right so we're deploying to staging we're testing the staging if both of them are successful then we go ahead and deploy to production and after that we test the production again so let's make sure that we have the right stages otherwise we're going to get some issues so deploy production yes test production test staging deploy staging we have the right bucket name here we don't have the right url we need to grab that as well for staging all right and all of a sudden our pipeline got a bit bigger but no worries one last check deploy staging test staging deploy production test production deploy staging is the stage we're using the staging bucket staging tests are using the staging url test staging deploy production test production all right i'm going to go ahead commit these changes and we'll see how they work in the main branch and a few minutes later after the main pipeline has also completed we'll see now that we have here a bunch of stages so after the tests then we're deploying to staging we're testing staging deploying to production we're also testing production of course we could go ahead and combine some of the stages so for example because our tests are very simple there's nothing that prohibits us from just putting this simple test inside the deploy job itself so that would save us some stages here but i just want to demonstrate how a longer pipeline essentially looks and what's the meaning of the stages so most importantly now if something happens with the deploy stage the pipeline breaks here the production environment is unaffected we don't have to worry about it so we have time to fix any problems in our deployment
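so at this point the deployment part of the pipeline looks more or less like this with aws_s3_bucket and aws_s3_bucket_staging defined in the project settings the two urls here are example values only and the build and test jobs are left out to keep the sketch short

```yaml
stages:
  - build
  - test
  - deploy staging
  - test staging
  - deploy production
  - test production

variables:
  # example endpoints, use the website addresses from your own buckets
  APP_BASE_URL: http://my-bucket-name.s3-website-us-east-1.amazonaws.com
  APP_BASE_URL_STAGING: http://my-bucket-name-staging.s3-website-us-east-1.amazonaws.com

deploy to staging:
  stage: deploy staging
  image:
    name: amazon/aws-cli:2.4.11
    entrypoint: [""]
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - aws s3 sync build s3://$AWS_S3_BUCKET_STAGING --delete

staging tests:
  stage: test staging
  image: curlimages/curl
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - curl $APP_BASE_URL_STAGING | grep "React App"

deploy to production:
  stage: deploy production
  image:
    name: amazon/aws-cli:2.4.11
    entrypoint: [""]
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - aws s3 sync build s3://$AWS_S3_BUCKET --delete

production tests:
  stage: test production
  image: curlimages/curl
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - curl $APP_BASE_URL | grep "React App"
```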
if you're looking again at the pipeline well the staging environment has really created a lot of duplication and all these variables with the bucket now we have aws s3 bucket with staging we also have the base url and the staging url obviously we have now two environments but we haven't really figured out a way to properly manage these environments luckily there is a functionality that gitlab offers for managing these environments going inside the project you will find here on the left hand side deployments and inside deployments there's this option environments what is an environment staging is an environment production is an environment wherever we're deploying something that is an environment and it really makes sense to have these environments defined somewhere and to work with the concept of environment instead of fiddling around with so many variables that's really not the best way to do that so what i'm going to do here i'm going to create a new environment and i'm going to call this environment production i'm also going to get the environment url from the pipeline so let's remember this is the production url so let me get that and i'll save it here and going back to environments i'm going to create a new environment and this will be staging again going back to the pipeline copying the name of the environment and putting it here what gitlab will also do is keep track of deployments on different environments now currently we don't have any deployments but we can directly open the environments from here so in case we forget where our environments are we can easily click on this open environment link especially for non-technical people that are also working with gitlab it's easier for them to see exactly where the staging environment is and what the url is they don't have to ask someone they can just directly go to the respective environment and see the respective version so it's a very very useful feature in terms of environments but what does it do for our pipeline well i'll tell you what this will do we're going to remove this from here right i'm going to go ahead first of all this is out i'm not using this anymore second of all i'm gonna go back to settings ci cd and expand here the variables and i'm gonna go here to this aws s3 bucket i'm gonna click here on edit and this is essentially our production environment this is for production and now we can also give a scope so we can tell gitlab hey this variable is associated with production and not with something else so we can essentially scope it and of course in this case because we're using it in main only we're gonna also protect this variable i'm gonna go ahead and update here and the same goes for this other variable right now we don't need this staging added at the end we can just call it aws s3 bucket and we're going to select here the staging environment scope and update the variable so now we have two variables that share the same name but they belong to different environments now the idea is to do the following we need to somehow adapt our pipeline and let's begin here by deploying to staging first of all we have this aws s3 bucket so i'm going to remove here staging from the end and additionally we need to tell gitlab hey this deploy to staging is associated with the staging environment i'm going to say here environment staging there are still a few things i would like to change first of all the entire pipeline right now is too long we still have this staging test and production tests and because this is just one curl command we can just move it away from here i'm gonna remove this stage altogether and all i want to do is to add it here on deploy to staging so essentially right after we have deployed we're also using this curl command it's pretty similar to what we're
also going to do on production but as you probably noticed already this app base url doesn't exist anymore we have removed those variables and we need to find a way to get our environment url and luckily again gitlab to the rescue there is a variable i'm going to go ahead and search for environment and i'm going to have here the ci environment name environment slug environment url so this is what we're interested in the environment url i'm gonna copy this and i'm gonna put it here in the curl command so i'm gonna send the curl command directly to the ci environment url same goes for the production deploy to production i'm gonna add it here so in this situation the production tests also don't make any sense so i'm gonna remove them and all these extra stages that we have here test production is not going to be needed test staging is not going to be needed so now we have a much simpler pipeline but we're still achieving the same thing most importantly we are using these environments we still have an error inside here but i'm just gonna commit these changes let the branch pipeline run merge these changes into the main branch and we're gonna take a look at the main pipeline to see which errors we still have there now if we're looking at the main pipeline you will notice something interesting deploy staging has passed it's working perfectly you will see here curl ci environment url it's fetching the page it's passing however if you're looking here at deploy to production it's all of a sudden complaining it's saying here for the aws s3 bucket parameter validation failed invalid bucket name so what's going on well the following thing has happened we somehow did not associate this with an environment or the environment is not correct so we need to go back and see why this job does not have access to this environment variable i'm going to go ahead here and take a look again at the configuration to make sure that the configuration itself of the job is correct what do we have here we have deploy to production but as you can notice i haven't defined an environment right so i have defined an environment for deploy to staging but i haven't defined an environment for deploy to production by looking here at the variables you will see that this aws s3 bucket is now scoped only for production and because this job doesn't say anything about production this variable is not exposed in the production job so to be able to solve this we also have to add here environment production i'm going to go ahead and copy this and the environment will be production and of course unfortunately this time we have to go again through all the stages the merge request and committing this in the main branch and just a few minutes later the entire pipeline will then succeed so we have deployed to staging we have deployed to production everything seems to be working fine we can take a look at the jobs to see what they're doing and we'll see here that everything worked out without any issues and of course we can also go here to deployments environments and here we are able to see the staging environment and the production environment and we can easily open them and we can see what was deployed and when it was deployed so it really keeps track here of the different environments and the deployments that took place so we see here on production there's only one deployment in staging we have two deployments that we have committed so we're essentially keeping track of what is going on in these environments
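with the environments and the scoped variables in place the two deploy jobs end up looking roughly like this note that both now use the same aws_s3_bucket variable name and that the smoke test has moved into the deploy script

```yaml
deploy to staging:
  stage: deploy staging
  image:
    name: amazon/aws-cli:2.4.11   # example tag
    entrypoint: [""]
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  environment: staging            # exposes the staging-scoped AWS_S3_BUCKET and sets CI_ENVIRONMENT_URL
  script:
    - aws s3 sync build s3://$AWS_S3_BUCKET --delete
    - curl $CI_ENVIRONMENT_URL | grep "React App"

deploy to production:
  stage: deploy production
  image:
    name: amazon/aws-cli:2.4.11
    entrypoint: [""]
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  environment: production         # without this the production-scoped variable is not exposed to the job
  script:
    - aws s3 sync build s3://$AWS_S3_BUCKET --delete
    - curl $CI_ENVIRONMENT_URL | grep "React App"
```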
you know what i'm still not super happy with this pipeline i mean generally when i see that a lot of things are repeating it kind of makes me wonder if there's not a better way so if we're looking here at deploy to staging and we're looking at deploy to production well essentially at this point because we have used all these variables these jobs are almost identical the only thing that is different is the stage the job name and the environment we're specifying but the rest of the configuration is identical and yes the answer is yes we can simplify this even more so essentially we can reuse some job configurations so i can go ahead here and i'm going to copy this and i'm going to go here and i'm going to define essentially a new job i'm just going to call it deploy this will be a special kind of job it will be a job that has a dot in front of it and as you remember we have used the dot notation to disable jobs by having this we can essentially have a configuration here we don't care about the stage right we don't care about the environment so this is something that is not in common with the rest it doesn't even have to be a valid job configuration we have just put here the parts that are really important for us and then in these other jobs we're going to keep what we need so what do we need we need a stage and we need the environment and of course we need the job name and for this other part here we can simply write something like extends so this will be the extends keyword and what are we extending we are extending dot deploy that dot in front is very important so don't miss it and the same goes for deploy to production let me remove everything i'm just going to keep here the extends and make sure it's properly indented so deploy to staging deploy to production we now have two simple jobs here essentially the deployment part is the same this also gives us peace of mind because if we're making changes to the deployment part we're only making changes here so if we make a mistake there the chances are we're going to be able to catch it before it goes to production let's commit this give it a try see how it works and it should essentially work the same as before and after a few minutes if we take a look at the main pipeline we'll see that it works as it should it looks as before and this is exactly what we expected we didn't want to see anything else but the pipeline working but now we have a much simpler pipeline configuration
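for reference this is roughly what the refactored deployment section looks like after pulling the shared parts into the hidden .deploy job

```yaml
.deploy:                          # hidden job, the leading dot means it never runs on its own
  image:
    name: amazon/aws-cli:2.4.11   # example tag
    entrypoint: [""]
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
  script:
    - aws s3 sync build s3://$AWS_S3_BUCKET --delete
    - curl $CI_ENVIRONMENT_URL | grep "React App"

deploy to staging:
  stage: deploy staging
  environment: staging
  extends: .deploy                # pulls in the image, rules and script from .deploy

deploy to production:
  stage: deploy production
  environment: production
  extends: .deploy
```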
And now I feel it is time for another assignment. Here is my idea: we currently test with curl that a specific text is on the website, and that's fine, it ensures the site is still working, but we haven't actually made any changes to the website itself, added any text or removed anything. How can we make sure that what we build here actually lands on the staging and production environments, and that we're not looking at a cache or something old? Maybe the deployment isn't even working, we're getting the older version, and we think everything is fine. So here's my idea: what if we add another file to our build, let's call it version.html, and inside it we put a build number, something that is different with every build, incrementing one, two, three, four and so on? Then with every build we can identify the build number and check which version has been deployed to staging and to production. How about that?

To get you started, I'll show you how to get this dynamic build information in there, because maybe this is something you don't yet know how to do; the rest of the assignment you'll have to figure out on your own. Trust me, you already know all the concepts needed to implement something this simple. I'm going to define a variable called APP_VERSION; essentially this is our application version, something like 12, 13, 14 and so on. When we need something dynamic, we should think back to the list of predefined variables that GitLab offers. On that list there are several variables related to the pipeline id: if you search for PIPELINE_ID you'll find a few that we can use. They change all the time and are injected by GitLab, so we can use them in our jobs. There is an instance-level id, which is typically a very large number because there are a lot of pipelines on a GitLab instance, but there is also a project-level id, which is something we can easily relate to because we'll see it increment with every pipeline. I'll take that variable and assign it to APP_VERSION. I'm redefining it only to give it a bit of meaning: I could have used the predefined variable directly somewhere, and that is also a valid solution, but by calling it APP_VERSION it's very clear that this is the application version we're using, not just some random id. That's it, I'll let you do the rest of the assignment. Just to recap: somewhere in the build, add a new file called version.html, put the app version inside that file, and then when testing the deployments to staging and production, add a new curl command that checks that this application version is actually available on that environment.

Okay, I hope you managed to solve this on your own; I feel I already gave you quite a lot of hints, but just to make sure, this is how I would solve it. Let's go to the build website job. Apart from everything we're already doing there, I additionally want to create this file. First think about the name: it has to go inside the build folder, because that's what we deploy, and the name will be version.html. To get the application version into the file we use echo to print it and redirect the output into the file. That is enough to create the file inside the build folder, which already exists because yarn build created it, and because the value is dynamic it will be different in every build.
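The build-side part of the solution might look something like this; the node image and the yarn steps are assumptions carried over from how the build job was set up earlier in the course.

```yaml
variables:
  APP_VERSION: $CI_PIPELINE_IID      # project-level pipeline id, increments with every pipeline

build website:
  stage: build
  image: node:16-alpine              # assumed build image
  script:
    - yarn install
    - yarn build
    - echo $APP_VERSION > build/version.html   # new: record the build number in the artifacts
  artifacts:
    paths:
      - build
```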
The next step is part of the deploy template, so I'm working directly in the template: we can essentially duplicate the existing curl command and point it at the environment URL plus the new file. I just want to make sure the environment URLs don't already end in a forward slash, so I'll check quickly under Deployments > Environments: editing staging shows no trailing slash, and production is the same. So inside our configuration I add a forward slash and version.html, and what are we looking for? The application version. Because it is a simple number we don't need quotes, so we can just grep for the variable (removing the stray extra dollar sign that slipped in), and that should work. Already as part of the merge request, as soon as the build has completed, I go into build website and look at the artifacts to see whether they contain this version.html file. There it is, version.html, with a size that isn't zero, which is a good sign; we can download all the artifacts or just look at the single file. Clicking on it opens the page, and it shows build number 35; it's pretty small, but you should be able to see it on screen as well. When the merge request is merged, I also check the main branch: the main pipeline works too, which makes me very happy, because I now have more confidence in what is going on in this pipeline. If I make a change, that change actually gets deployed to staging and production, and we have this additional check in place to ensure it.

All right, let's recap a bit. The first part of the pipeline is CI, continuous integration; the second part is CD, continuous deployment. We now have a more realistic continuous deployment pipeline: we test something, we first deploy to staging and make sure everything works there, and only after that do we deploy to production. But what is a continuous delivery pipeline? That's exactly what I want to show you. A continuous delivery pipeline is simply a pipeline where we don't automatically deploy to production: we add a button, and only when we are sure we really want that change in production do we click the button and make the change. Let me show you how to do this; I assure you it's super easy. In the pipeline configuration, the job affected by this change is deploy to production, and on that job we add the condition when: manual (it's "manual", not "manually"). This tells GitLab that we only want to run this job manually, so some manual intervention is required. Let's commit these changes and take a look at the final pipeline.
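For reference, the change that turns this into a continuous delivery pipeline is just one extra line on the production job, roughly:

```yaml
deploy to production:
  stage: deploy production           # assumed stage name
  environment: production
  extends: .deploy
  when: manual                       # GitLab waits for someone to press the play button
```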
In this final pipeline, notice that deploy to production now looks a bit different from deploy to staging: clicking on it shows that it is a manual action, and there is an additional play button. If we go to Deployments > Environments, we can see what we have on each environment. On staging, opening version.html shows version 40, so that is the version on the staging environment. The production environment looks the same at first glance, but its version.html shows a different number: staging and production now have different versions, because our pipeline has not yet deployed the new build to production. To deploy to production we have to click that button, and only then will the deploy to production job start. This is essentially the difference between continuous deployment and continuous delivery. What we have here is a continuous delivery pipeline: we continuously build packages of software and deploy them to staging, but we don't deploy to production automatically without that manual intervention. For some organizations this is mandatory, which is why I'm showing it to you; with some legacy systems you cannot always deploy everything without checking a few things in advance. Looking back at the pipeline, deploy to production has now completed, so if I refresh the production website I get version 40, and staging also shows version 40: both environments now run the same version. I hope the difference between continuous delivery, which is what we just did, and continuous deployment, where every commit that lands in the main branch gets deployed to production, makes more sense now.

Hey, how are things going? Are you liking the course so far? Are you following along? Let me know by leaving a comment in the section below or by sending me a message on Twitter, LinkedIn, or any social platform where you can find me. I'd love to hear from you and to know how you are using this course. We are at the end of this unit, so I'm going to grab a coffee and I'll see you in a bit.

So far we have worked with a static website and deployed it to AWS, which is probably one of the easiest scenarios involving GitLab CI and AWS. However, modern applications tend to be more complex, and most of them use Docker nowadays. So in this section we'll dockerize our website, and instead of copying some files to AWS we'll deploy an application that runs in a Docker container. To do that, we'll build a Docker image as part of the build process, store it in the GitLab container registry, and deploy it to a service on AWS called Elastic Beanstalk. If you're eager to learn more, let's jump right into it.

When we use a cloud provider like AWS we can rent virtual machines that have dedicated CPU, memory and disk storage, and we can use any operating system we desire. But this also means we are in charge of managing that machine: we need to ensure it is secure and that all the software running on it is updated. This is often too much overhead, especially for some types of applications. There is a way to take away this complexity and focus only on the application we want to deploy: AWS Elastic Beanstalk is a service that lets us deploy an application in the AWS cloud without having to worry about the actual virtual server that runs it. It is a great way to reduce complexity and probably one of the easiest ways to deploy an application in the AWS cloud. By default Elastic Beanstalk can run Python, Java, Node.js, PHP and many other types of applications, but it can also run Docker containers, which gives us a lot of flexibility. We'll run a web server application that serves our simple website files, so this time, instead of just uploading some files to AWS, we are providing the entire application, self-contained in a Docker container. I'm sure you'll be amazed by how easy it is to run an application this way. In the background we will still be using a virtual machine, but we don't need to worry about managing it. At this point, however, I need to warn you about potential costs.
Running these servers for a few hours or days will most likely be free or cost you only a few cents, but if you let a service run for a month you may get unexpected charges on your card. Even if you are not actively using a service, once you have created it, it uses resources in the cloud, so stop any services you are no longer using, and find a way to set a reminder so that you don't forget about them. With that being said, let's start using Elastic Beanstalk.

Let's go ahead and create an Elastic Beanstalk application. I'm in the AWS console, and the first step is to search for Elastic Beanstalk: typing "eb" shows it as one of the results. Since I have no applications yet, I get the getting started guide, so I'll click Create Application. Let's call the application "my website"; you don't need to include any application tags. Under Platform there are different supported platforms, but the one we're actually interested in is Docker, so that we can essentially deploy anything we want. I'll select Docker, leave the defaults as they are, and start with a sample application: we're not uploading any code yet, we'll let Elastic Beanstalk create the instance and the application, and afterwards we'll add our own application on top of that. I'll click Create Application; it typically takes a few minutes, and in the end you should see something like what I have here.

So what has happened? We created an application and initialized the sample application that Elastic Beanstalk provides. In order to actually run that application, the setup wizard also created an environment: an application is like an umbrella for environments. Under Environments you'll see one named after the application, something like my website-env; it belongs to the application "my website", and you get some information about when it was created, under which URL it is available, which platform it uses, and so on. Clicking on the environment shows a link, and opening that link shows the sample application that was deployed; it's just there to tell you that everything is working properly and that this whole setup went through without issues. The question is what actually happened in the background. To understand that, let's go to the services and look at EC2, Elastic Compute, the service we're actually interested in: these are virtual servers we can create. We haven't created one ourselves, but under Instances you'll see one running instance, its name starting with "my website", along with its instance type; this is the virtual server that is actually running our application. Additionally, if you go to S3 you'll see an extra bucket: we still have the buckets we used for hosting the website, but Elastic Beanstalk has also created one of its own. So what Elastic Beanstalk has done is create the infrastructure required to run the application.
We didn't have to worry about creating any of that ourselves, which is also why it took a few minutes to set everything up. Now let's try to understand how we can deploy something of our own, how we can get our own application to run there. Because we're using Docker, we need to provide a manifest file, a file that describes the application we're trying to deploy. Inside the project, in the templates folder, you'll find a file called Dockerrun.aws.public.json, and this is the manifest file I'm talking about. It essentially tells AWS which container we want to run; since we selected the Docker platform, we can only run Docker containers there. This one references a public container: the image name is nginx, which is a web server. What we want to try is to actually use this configuration, this file, to deploy to AWS and make sure the deployment process works. This is again something we'll do manually at this point, just to verify everything works properly. Go ahead and download the file, then go back to AWS and open Elastic Beanstalk. Inside the environment we have the opportunity to upload a new version of the application: right in the middle you'll see the running version (the sample application) and the option Upload and deploy. We want to upload that JSON file, so I'll select it, give it a version label, say "sample application version 1", which is totally fine, and click Deploy. Elastic Beanstalk takes the file and starts a deployment, updating the environment. This takes a few minutes, and in the end we want to see the health status OK and the new application deployed and available at the environment's address. And now the status is OK, health is OK, everything seems to be working. You can refresh the page to see which version is running: the running version is sample application version 1, exactly what we deployed. Opening the address shows "Welcome to nginx", the welcome page of the nginx server, which we deployed simply by providing this JSON file describing which container Elastic Beanstalk should use. So, in order to deploy our own website we need to create a Docker image, provide this JSON file, and of course automate everything; that is what we'll do in the upcoming lectures.

How do we create a Docker image with our website? A Docker image is created by following a set of instructions, something like a recipe, and we store these instructions in a file called a Dockerfile. Let's create one: from the Web IDE I'll create a new file, and one of the suggestions is Dockerfile. Which instructions do we write inside it? First of all, we start with a base image, an image we want to add something on top of. In our case, because we want a web server serving our files, we start from the nginx image, exactly the same image we used to test our deployment to Elastic Beanstalk. So I'll write FROM nginx, and additionally there is something I highly
recommend: writing a version, in other words a specific tag. To know which tag to choose, I'll go to Docker Hub, search for nginx, and take the verified content, the official nginx image. Looking at its tags, I'll search for alpine, because alpine generally gives us a very small Docker image; a specific alpine tag like the one shown here would be a very good one to use as an example, so I'll copy it, and back in the editor add a colon and the version. You have to be careful how you write this: it has to look like nginx, a colon, and then the tag. By starting from this base image we get everything that is in it, so we essentially already have the application, because the application is the web server. The next step is to add our own files, which are in the build folder, and put them on this web server, and we do that with the COPY instruction: we copy the build folder, and then we specify where we're copying it to. By default there is a folder where nginx stores the files it serves, and its path is /usr/share/nginx/html, so we copy the contents of the build folder into that html folder. And that is everything we need to do in order to build our Docker image; these are the only instructions we need at this point: take the nginx base image at a specific version, thanks to the tag, and copy everything from the build folder into the html folder that nginx serves.
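The complete Dockerfile is only two instructions, something like this (the exact alpine tag shown here is an assumption; pin whichever tag you picked on Docker Hub):

```dockerfile
# small official nginx base image, pinned to a specific tag
FROM nginx:1.22-alpine

# copy the built website into the folder nginx serves by default
COPY build /usr/share/nginx/html
```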
Just because we created this Dockerfile doesn't mean something will happen automatically; we still need to make changes to our pipeline. And since we're already making changes, and we want to deploy to Elastic Beanstalk, we don't really need the S3 deployment anymore, so I'm going to remove all the jobs related to S3: deploy to production, deploy to staging, the .deploy template, and also test website, because we'll introduce a better way of testing; the corresponding stages can be removed as well. What remains is build website. Next we introduce a new stage called package and associate it with a new job, build docker image, with stage: package. How do we build the Docker image? It's relatively simple: the command is docker build followed by a dot, which refers to the current folder, and the current folder contains the Dockerfile we just created. To be able to run this docker command we also have to define the image the job will use, and that image is docker. Just using the docker image, however, will not work; we'd get an error. The reason is that the Docker architecture is composed of a client and a server: the docker command we run here is the client, and the client sends instructions to a server, the Docker daemon, which is the one that actually builds images. To get access to a daemon inside GitLab CI we need to use the concept of services: we define the services keyword, which holds a list of services to start, and what we start here is a service called docker-in-docker, using the dind tag. This service is another Docker image, one that can build Docker images; it is accessible to us over a network, and the docker client in our job talks to the Docker daemon running inside that service. I know this may all seem a bit confusing at the beginning, but it is the minimum we need in order to build Docker images from GitLab. What I also like to do is set fixed tags for both docker and docker-in-docker. On Docker Hub, find the docker image (you'll find the link in the course notes, because it is surprisingly hard to find; not many people search for "docker" on Docker Hub), copy a version, and add it to the job's image; then use the matching dind tag for the daemon service, so that both have the same version. When we build images we also like to tag them, which helps us identify the images we create, since we can create multiple images. To tag an image we specify an image name and a tag with -t, keeping the dot at the end of the command; that dot is very important, so don't forget it. For the name we use the predefined variable $CI_REGISTRY_IMAGE (don't forget the dollar sign in front of it); used on its own it builds the latest tag, a tag that always points to the most recent image we created. Additionally I'll create a second tag, $CI_REGISTRY_IMAGE followed by a colon and the $APP_VERSION variable that we still have in our configuration. So we are creating two tags, and these variables ensure our Docker image gets the right name. To make sure we have indeed built the image, we add docker image ls: we are building only one image, but tagging it with two different tags, latest and the app version, and this command shows all the images and tags available on this Docker instance. I'm not going to restrict this job to the main branch; I'm still playing around in a branch, so it will run there, and once the execution is done we'll jump directly into the build docker image job and try to understand what happened.
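Put together, the packaging job might look like this at this point (the pinned docker/dind versions are placeholders for whatever tag you copied from Docker Hub):

```yaml
build docker image:
  stage: package
  image: docker:20.10.12             # docker client
  services:
    - docker:20.10.12-dind           # docker daemon the client talks to
  script:
    # one image, two tags: :latest (implicit) and the app version
    - docker build -t $CI_REGISTRY_IMAGE -t $CI_REGISTRY_IMAGE:$APP_VERSION .
    - docker image ls                # verify the image and both tags exist
```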
In the job output there are a bunch of logs, but most importantly we can see which Docker image we're using, docker, and, this is new, that the docker-in-docker service is being started; that service is available over the network, and the docker client inside our job talks to the daemon running in it. A lot of the logs aren't really that relevant; the interesting part comes when the image is actually built. You can see our command and the steps in which the image is built. Docker works with the concept of layers, which we won't go into here, but essentially every instruction we execute creates an additional layer on the image. The image has been built successfully, which is exactly what you want to see, and then we have the two tags we created: the latest tag and the version tag. They both point to the same image, but they are different tags, and you can see that even better in the output of the command that lists images: it shows the image name, the tag, and that internally the image id is the same. So we have one image with two different tags, and we've successfully built and tagged it.

As you might have guessed, though, the Docker image we just built was lost as soon as the job finished. A Docker image is not a regular file or archive that we can simply declare as an artifact and upload; when we want to preserve a Docker image, in this case the one we just created, we need to save it in a registry. Docker Hub, which we've been using to find tags and images, is a public registry, but our projects are typically not public, so it doesn't really make sense to use Docker Hub; for that reason we need a private registry. Both AWS and GitLab offer private Docker registries, and for this example we'll use the one offered by GitLab. On the left-hand side, under Packages & Registries, you'll find the Container Registry, because Docker is all about working with containers, and at this point there are no container images stored for this project. Just building the Docker image does not automatically add it to the container registry: saving an image to a registry is called pushing. So let's go back and make some changes to our pipeline. The command to push is docker push, and we want to push all tags, so we use the --all-tags parameter followed by $CI_REGISTRY_IMAGE. That covers the pushing, but where are we pushing, and since it's a private registry, don't we need to log in somehow? Yes, that's correct: pushing alone will not work; we need to do something before we push, and we'll do it right at the beginning of the script, because if there are any login issues we want to know as soon as possible, not only at the end. The command to log in is docker login, relatively easy, and we need to specify a username and a password. We're not actually using the username and password of our GitLab account; we'll use variables again, which give us temporary credentials we don't need to care much about. Looking at the available variables, we find CI_REGISTRY, CI_REGISTRY_USER and CI_REGISTRY_PASSWORD: first we specify where we want to log in, which is $CI_REGISTRY, and additionally we specify the
username with -u and the value $CI_REGISTRY_USER (the dollar sign in front is very important), and the password with -p and $CI_REGISTRY_PASSWORD. Those are all the credentials we need to log in. However, in more recent versions of Docker, specifying the password like this is not really the best way to log in: you may get a warning, and it's possible that in future releases the -p argument won't be available anymore. So what I like to do is remove it and tell docker login to read the password from the standard input instead, using --password-stdin. The standard input is simply something we pipe in from another command, and that other command in our case is echo: we echo $CI_REGISTRY_PASSWORD, but instead of displaying it in the logs we pipe it into docker login, which sees that the password is coming from stdin and grabs it from there. Using it this way ensures the password doesn't get exposed; in our CI logs it wouldn't get exposed anyway, but it's good to know, and that's why we have this construct. So once again: we echo the password and send it to docker login, which knows the password is coming from the standard input, and apart from that we have the parameter with the user and the registry we're logging in to. Make sure all the variables you use here have a dollar sign in front of them, otherwise they will not be resolved as variables. Let's give this a try and see if our image ends up in the registry. The build docker image job has been successful, and the logs show exactly which tags we pushed: tag 46 and latest, with no errors. Next we can go to the Container Registry, where we'll see the root image for the project with two tags, 46 and latest, along with the size of the image; because we're using alpine as a base image, our images are relatively small, which is definitely a good thing.
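Putting it all together, the script of the packaging job now looks roughly like this:

```yaml
  script:
    # log in to the GitLab container registry using the temporary job credentials;
    # the password is piped in via stdin instead of being passed with -p
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE -t $CI_REGISTRY_IMAGE:$APP_VERSION .
    - docker image ls
    - docker push --all-tags $CI_REGISTRY_IMAGE
```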
So now we have successfully built and pushed this container, but how do we know the application is actually working? How do we know nginx is serving our files, that we have the right files, and that everything works the way we think it does, before we move on to the next step of deploying to AWS? For that reason it makes sense to do some acceptance testing on the Docker container itself, just to make sure we have everything we expect. With that being said, let's add another stage, which I'll call test, and another job, test docker image, assigned to that stage. How do we test it? Essentially the same way we've tested a deployment or anything else in the past: with curl. We need an address after http://, which we don't know yet, and then we can use grep to search, for example, for the app version (we could also search for any other text on the page); here we'll check the version.html file to make sure we have the right app version. So we need curl, which means we need a simple image that has curl, and we'll use curlimages/curl. The question is how to start our container, and this is where the part we learned earlier becomes very useful: services. Again we'll take advantage of services to start the Docker container we just built. Under services we define the name, which is of course $CI_REGISTRY_IMAGE with the $APP_VERSION tag (not latest; we want exactly the tag we just built), and additionally we can specify an alias. The alias gives the service a friendly name, so that we know where it is available on the network; I'll use the alias website, and in the curl command the address becomes simply http://website/version.html, piped into grep for the app version. GitLab takes care of starting the Docker container and registering it on the network as website, so we can simply call it from our curl script. Let's give it a try and see how it works. The confirmation that our Docker image works properly comes from the test docker image job: it shows that the image starts an HTTP server, that the version.html file we added is available, and that it contains the application version we expect. So now we know we have a working Docker image and we can deploy it to AWS.
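The acceptance test job then looks roughly like this:

```yaml
test docker image:
  stage: test
  image: curlimages/curl
  services:
    - name: $CI_REGISTRY_IMAGE:$APP_VERSION   # start the image we just built and pushed
      alias: website                           # reachable on the job network as "website"
  script:
    - curl http://website/version.html | grep $APP_VERSION
```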
If you remember, the first time we deployed an application to Elastic Beanstalk we used a JSON file describing which Docker image we wanted to deploy, and to automate this process we have to do the same thing: give Elastic Beanstalk that file. Whenever we interact with AWS services and need to provide files, most of the time we have to do it through S3: we upload the files to S3, and the other services can read them from there. So to automate this, we need to generate the JSON file, upload it to S3, and from there tell Elastic Beanstalk to start a new deployment. Going back to the pipeline, we re-enable the deploy stage and the deploy to production job, this time focused on deploying to Elastic Beanstalk: add a new stage called deploy and a new job, deploy to production. Since we'll be using the AWS CLI, I'll add the basic structure the job needs in order to use it: the aws-cli Docker image with the entrypoint overridden, the deploy stage, the production environment, and the start of our script.

Looking at the files in the templates folder, there is a Dockerrun.aws.json file. The name of the file is not that important, but its format is, because this file tells AWS Elastic Beanstalk what we are trying to deploy: it says this is our image and this is our tag, both as variables. Additionally, because this is a private registry, we also need to provide authentication information, and that lives in a second file, the auth file, into which we need to put a token that grants access to the registry. What we need to upload to S3 is the Dockerrun file plus this auth file, which is actually referenced from inside the Dockerrun JSON. So let's go to the pipeline and quickly add these files: I'll paste in the copy configuration, which still uses aws s3 cp and is pretty similar to how we copied files to S3 before, so I'll skip over that part. Now, as you may have noticed, these files contain variables, and those variables won't get replaced automatically if we upload the files as they are; on top of that, the paths aren't even right, since the files live in templates and not in the current folder where the script runs. This brings me to something very important that I wanted to show you: how to do environment variable substitution in files. For that we use a very handy command called envsubst, short for environment substitution, so it's fairly easy to tell what it does: it replaces environment variables. We specify the input file with the less-than sign, for example envsubst < templates/ followed by the file name, and the output file with the greater-than sign; the file name stays the same, only the location changes to the current folder, which is perfect for the next command we'll use. We do the same substitution for the auth file, so any environment variables in these files get replaced. To use this command we need to install a utility called gettext, which doesn't come with the Amazon image, so we install it with the package manager, adding the -y flag so that any questions the installer might ask are answered with yes, because the job cannot interact with an installer. Now that we can do the substitution, it's also very nice to cat the files afterwards, just to make sure the variables were substituted as we expect.
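As a sketch, the first half of the deploy job could look like this; the aws-cli tag and the exact template file names are assumptions based on how they come across in the transcript.

```yaml
deploy to production:
  stage: deploy
  image:
    name: amazon/aws-cli             # pin the tag you chose
    entrypoint: [""]
  environment: production
  script:
    - yum install -y gettext         # provides envsubst; -y answers yes to installer questions
    - envsubst < templates/Dockerrun.aws.json > Dockerrun.aws.json
    - envsubst < templates/auth.json > auth.json
    - cat Dockerrun.aws.json auth.json            # verify the variables were substituted
    - aws s3 cp Dockerrun.aws.json s3://$AWS_S3_BUCKET/Dockerrun.aws.json
    - aws s3 cp auth.json s3://$AWS_S3_BUCKET/auth.json
```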
As a final check, we should also go through these files and make sure we have all the variables they reference. CI_REGISTRY_IMAGE and APP_VERSION are already defined, so that information is there. AWS_S3_BUCKET is relevant for where we upload these files, and since Elastic Beanstalk has already created an S3 bucket, maybe we should use that one: I'll switch to the S3 service and copy its name. I could have used the other bucket we created earlier, but I don't want to put anything resembling credentials in a public bucket, so I prefer this one, which is not public. In the environment variables of the job we still have AWS_S3_BUCKET scoped to production, so I'll click to edit it, change the value, not protect the variable because I'm still working in a branch, and update it. So at this point we have everything for that file. Now let's look at the other file, the auth file: it contains a deploy token. What is that? We have pushed our Docker image to the GitLab registry, but AWS has no credentials to connect to it, and again we're not providing a username and password, certainly not our account credentials; instead we generate a token. GitLab allows us to generate a deploy token that AWS can use to log in to our private container registry and pull the image. To do that, go to Settings > Repository, where you'll find Deploy tokens as one of the options, and create one: I'll name it aws so we know why we created it, give it the username aws, and grant the permissions we need, read_repository and read_registry. Once created, the token is essentially a password, but it will only be displayed once, so I'll copy it and create a new variable to store it, called GITLAB_DEPLOY_TOKEN. The format I'll use for the value is the username aws, a colon, and then the token I just copied. I won't protect this variable, but I can mask it, just to be safe. Now, inside the pipeline we still need to make a small change to the script itself: we need to convert this username and password pair to base64, because that is what AWS expects inside the auth JSON. Converting something to base64 is relatively easy: if you have a string, say hello, you can convert it by simply piping it into the base64 command, which outputs the string encoded as base64.
Of course we don't want anything hard-coded here; that is why we defined the variable, so we'll use $GITLAB_DEPLOY_TOKEN. Additionally we want to make sure there are no newlines in the token, so we pipe the result through another command, tr -d with backslash-n, which deletes any newline characters. Finally we need to put the result into a new variable that can be substituted into the file, so we wrap the whole expression in $( ), which evaluates it, and use export, which is a way to create an environment variable from within a script: export DEPLOY_TOKEN= followed by the expression. Whatever output the expression produces is stored in DEPLOY_TOKEN, and DEPLOY_TOKEN is exactly the placeholder we have in the auth file. Before committing these changes, I'm only interested in making sure this one job works, so I'll simply disable all the previous jobs by putting a dot in front of them, commit the pipeline, and see it in action. And it seems I'm in luck: no errors while running this job. Let's look at the logs to see if we have everything, and indeed it seems something is missing: the image has been replaced, the version is there, the bucket is there, and in the auth file the registry is correct, but the authentication information itself is missing. Let's try to debug and understand why. In the pipeline, let's double-check that we used the right variable names: GITLAB_DEPLOY_TOKEN should match the variable we already defined in GitLab, and it does; DEPLOY_TOKEN is exactly what we used in the file, which we can double-check as well, so that all seems fine. So what is going on? Let's look again at the commands we executed, to see if there is anything suspicious. We started the script, we have the AWS CLI, we installed gettext, no errors up to that point, and then we exported the variable by running that expression; and looking at line 77 of the log we see "td: command not found". So the expression we placed inside $( ) gets evaluated into the variable, but when it fails, it doesn't fail the job. Opening the pipeline and looking at deploy to production, the command we used was td, when it should actually be tr, which stands for translate. This was just a silly mistake, but it goes to show how important it is to actually read the logs to understand what's going on. With that fixed, both files now have all their variables replaced, including, this time, the auth file, and if we jump into S3 and look at the bucket we can see the Dockerrun file and the auth file sitting there.
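For reference, the corrected line in the script looks something like this:

```yaml
    # GITLAB_DEPLOY_TOKEN is a CI/CD variable holding "aws:<deploy token>";
    # strip newlines with tr (not "td"!) and base64-encode the result for the auth file
    - export DEPLOY_TOKEN=$(echo $GITLAB_DEPLOY_TOKEN | tr -d "\n" | base64)
```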
The Dockerrun file essentially describes what we are trying to deploy, and it links to and mentions the auth file, which contains the authentication information for connecting to our private registry. Now let's continue with the actual deployment. We have copied Dockerrun.aws.json and auth.json into the S3 bucket, so we can initialize the deployment, again with the AWS CLI but this time against a different service: the service we're deploying to is Elastic Beanstalk, so the commands start with aws elasticbeanstalk. This is done in two steps: first we create an application version, with the create-application-version command, and then we run update-environment. Why are these two steps necessary? In the first step we take the Dockerrun file and create a new application version; once that version exists, we tell Elastic Beanstalk which environment we want to update with it. In Elastic Beanstalk we have the application, but an application can have multiple environments. In our case there is a single environment, but in theory you can create a version once and take it through different environments, which is why the process is split this way; it may look a bit odd at first, but these are the steps required to get it to run. Just writing create-application-version and update-environment is not actually sufficient, though; we need to specify some additional parameters, and the first one is identifying the application with --application-name. Looking in Elastic Beanstalk, I'll copy the application name exactly as it appears there. I could paste it directly after the parameter, but I'd rather take advantage of variables. There are multiple places where we could store this, but I'll just put it in a variables block inside the job itself, for example APP_NAME with the value my website, and reference it with $APP_NAME wherever it's needed. There is something you need to pay attention to here: "my website" is composed of two words with a space in between. If we use the variable unquoted, the command will think the application name is just "my" and treat "website" as some other parameter we're passing, which is not recognized. To get around this we put the value between quotes, and the variable reference as well, so the command knows the name is "my website", space included. The application name is also needed for the update-environment command, so I'll copy it there too. Additionally, when creating the application version we need to specify a label for the version we are deploying.
We already have an application version in the pipeline, so it does make sense to use it: I'll add --version-label $APP_VERSION (don't worry if this wraps onto a new line in the editor; the command is actually on a single line). For update-environment it also makes sense to specify the version, so I'll add the same --version-label parameter there. When creating the version we also need to tell Elastic Beanstalk where the file is, which bucket it lives in and how to read it, and for that we use the additional --source-bundle parameter, which lets us specify an S3 bucket with S3Bucket= set to the bucket we used, and the name of the object we're referencing with S3Key=. You need to pay attention to how you write this; it needs to be exactly as shown. One thing people often find confusing at first: for the other parameters there is no equals sign, it's just the flag, a space, and then the value, so it's not --application-name=...; the source bundle is different because the entire S3Bucket=...,S3Key=... string is itself a single value that gets parsed later, which is why the equals signs are fine inside it. So, to create the application version we specify the application name, the version label, and the source bundle, that is, where the file that says what we should deploy lives. This is almost the same as when we uploaded the file manually, except that back then the upload also triggered the deployment, two steps in one, whereas here we do it in two separate steps. After creating the version, the next part is updating the environment: we already have the application name and the version label (which application we're deploying to and which version we're deploying), and we also need to specify which environment we're updating, with --environment-name. I'll add another variable for that, let's call it APP_ENV_NAME (you're free to name it as you wish), and again I'll go to Elastic Beanstalk and copy the environment name exactly as it appears there; if I specify anything else it won't match. As you can see, this value has no spaces, so we don't need to put it between quotes, and the same goes for the other values here; only if a parameter value contained a space would it need to be quoted.
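Putting the parameters together, the two Elastic Beanstalk calls might look like this. The application and environment names are placeholders (copy yours exactly from the console), and the S3Key file name is an assumption matching the template file used above.

```yaml
deploy to production:
  variables:
    APP_NAME: "my website"           # value contains a space, so quote it wherever it is used
    APP_ENV_NAME: Mywebsite-env      # placeholder; copy the exact environment name
  script:
    # ...envsubst and aws s3 cp lines from before...
    - aws elasticbeanstalk create-application-version --application-name "$APP_NAME" --version-label $APP_VERSION --source-bundle S3Bucket=$AWS_S3_BUCKET,S3Key=Dockerrun.aws.json
    - aws elasticbeanstalk update-environment --application-name "$APP_NAME" --version-label $APP_VERSION --environment-name $APP_ENV_NAME
```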
Finally, let's re-enable the jobs we disabled earlier, because we still need the Docker image, and run the entire pipeline to check whether our deployment works. When we look at the pipeline, the deploy to production job has failed, so let's jump into the logs and try to understand what happened and why. We can see the commands that were executed: copying the files to S3 still works, no problems there, and the last command executed is elasticbeanstalk create-application-version, so there's a good chance that this command is responsible for whatever error we're getting. Looking further into the logs, the error says access denied: it's telling us that our user, the gitlab user we created with programmatic access, is not authorized to perform this action, create application version. If you remember, we only allowed that user to work with S3 resources, uploading files, deleting files and so on; that works, but we never authorized it to work with Elastic Beanstalk, so we need to change that. From the AWS console, let's open the IAM service (identity and access management), open our user, the one named gitlab, and attach additional policies. It already has AmazonS3FullAccess, which is why S3 works; now click Add permissions, attach existing policies, search for elastic beanstalk, and grant the administrator access policy for AWS Elastic Beanstalk. I'm not going to go into these policies in detail; we're just interested in getting this to run, but of course, if you were to use something like this in production you would need to be very careful about what each user is allowed to do; such generous policies are typically not a good idea in production, so keep that in mind. I'll add the permissions, and the user now has this additional permission and can work with Elastic Beanstalk. Going back to the logs, instead of re-running the entire pipeline we can simply go to this job and hit the retry button: we haven't changed any files, and the new policy applies to the user dynamically, so hopefully the job will now work. And this time it looks much better: no errors. The first command, create-application-version, returns a response telling us the application name, the version, the bucket we're using, the name of the file and so on, and starts processing it. The next thing we get is the update-environment response, which also went through without issues and tells us the environment name, the application, the version, plus some technical details. We can go back to the AWS console, look at the Elastic Beanstalk service, and see that the running version is now 60.
I can click on the environment just to make sure everything is indeed there: the health is OK, and clicking the URL opens our website, so it does seem to work very well. We have now manually verified that the deployment was successful, but how about checking in the pipeline that the deployment succeeded? We can use the exact same approach we've used so far. Back in the pipeline, inside the deploy to production job, we want to add a curl command, pretty similar to the one we used when testing the Docker image, so I'll add it on a new line. Of course we also need the address of the application, of the environment, and if you remember we already have something for that; we just need to update the URL of the production environment. I'll copy the Elastic Beanstalk URL, go in GitLab to Deployments > Environments, edit the production environment, replace the old S3 URL with this one, and remove the trailing slash; this is now the external URL for the environment. In the pipeline we then use the predefined variable that gives us this URL: you probably remember the CI environment name, slug and URL variables from before, and the one we want is CI_ENVIRONMENT_URL. I'll copy it into the pipeline configuration and use it instead of website; I can also drop the http:// prefix, because the URL already contains it, then append /version.html and grep for the version. Now, I'll tell you right away: if we leave the command like this, it will not work. The reason is that when we update the environment, AWS needs a little while to actually perform the deployment and put the new version on the environment; it is not instant. But the next command runs immediately after we trigger the update, when the environment may not be ready yet or may still be serving the old version, so the curl would fail and we could not verify at that point whether the deployment succeeded. What we need to do is wait a bit. We could of course use something like sleep and wait ten seconds or so, but with AWS there is a better way: the AWS CLI has a wait tool, available for various AWS services including Elastic Beanstalk. So we can write aws elasticbeanstalk wait, and what are we waiting for? For the environment to be updated, so environment-updated, and then we specify the application name, the environment name and the version label, essentially the same parameters as on the previous command, which I'll copy onto the new line. This command keeps checking with AWS in the background: are you done updating that environment? No, not yet, I need some time. Okay, wait a bit, try again: are you done? Yes, done. Only then does the waiting stop, the next command, the curl, can run, and we can check that the correct version has indeed been deployed and that the environment is working properly.
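The last two script lines might then look like this. Note that the waiter takes the same filters as describe-environments, so the environment flag is shown in its plural form here as an assumption; double-check the exact option names with aws elasticbeanstalk wait environment-updated help.

```yaml
    # poll AWS until the environment has finished updating, then verify the deployed version
    - aws elasticbeanstalk wait environment-updated --application-name "$APP_NAME" --version-label $APP_VERSION --environment-names $APP_ENV_NAME
    - curl $CI_ENVIRONMENT_URL/version.html | grep $APP_VERSION
```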
so i'm going to go ahead and commit these changes and we'll let the entire pipeline run and see how this goes and the pipeline is still successful so we can take a look into the deploy to production job and see what exactly happened here and you'll be able to see that this command is executed and after this the next command with curl is also executed and it passes without any issues you will see here version 61 has been deployed this is exactly the version label that we have defined here so this again confirms that the version that we wanted to deploy has actually landed on the environment where we wanted to have it so that's about it in terms of deploying to elastic beanstalk so just to recap we have gone through all the stages we have started by initially building our code whatever we have here compiling it building it running some tests and then publishing this as artifacts and then we have created a docker image essentially we have created an application we have tested that application and then of course we have a really simple pipeline here we didn't go through any other stages we deployed directly to production but the principles that i've shown you here can be used for similar projects and of course based on all the other information that i've shown you throughout the course you can build pipelines that are more complex than this one but the most important thing is to understand how you can build such pipelines how you can use them to deploy and how you can end up with something that works and generally to make this process as enjoyable as possible since now we are coming towards the end of the course i thought it would be a good idea to have a final assignment and what i have here is a project which will document who has completed this course so what you have to do is to check the course notes and in the course notes you will find the link to this repository and go ahead and click here on request access and in a few hours you will probably receive access to this repository and you will be able to make changes to it so you can open the web ide and open a merge request and you can add if you wish of course your name to a list of people who have completed this course so let me show you how to do this so once you have access to this you will no longer see this part with requesting access and you can open the web ide and you will not be asked to fork this project that's the main difference and then you can go ahead and change the files the files that we want to change are located here in source and the file that contains the code is this app.js and here there will be a table and i invite you to add your name and other information to this table so for example i've added here my name i've added here my username for gitlab but also my country of origin and also a message to the entire world and i'm gonna submit this as a merge request and i invite you to do the same so i'm going to go ahead here and commit this and i cannot commit directly into the main branch so i have to go through the path of creating a merge request so i'm creating a feature branch with my name let's give it a meaningful title and then i can just go ahead and create the merge request once you have access to this project you will be able to see any other merge requests and i invite you to take a look at them to see what other people have changed to make sure that everything is working properly and if someone breaks the pipeline in their own branch maybe you can give people some tips in regards to what they did wrong and what didn't work so well
essentially be part of this review process try to understand how to collaborate on this project and of course once your merge request gets reviewed it will be merged into the main branch and then you will be able to see your name appearing on a web page so i think that's kind of a nice and interactive way of essentially concluding this course and so yeah i hope you will follow along and do this in terms of editing let me give you some advice once there are a few people that have been added to this list what i highly recommend is that you don't just add your name at the end because that way there's the highest chance that you will run into conflicts so when you're trying to make changes try to put your name somewhere in the middle or something like that between others try to keep the indentation and everything so that it looks nice but yeah that's my advice to you so even after this course you can collaborate more inside the merge requests try to play more with gitlab with pipelines see how it works and yeah i'm looking forward to your contributions all right you did it this is the end of the course but don't go away yet i still have some valuable tips for you first of all i want to give you a final reminder to terminate any aws services you have created so that you don't encounter any unexpected costs you'll find a small cli example for this right after this section we have accomplished so many things in a very short amount of time i know this was a lot to take in but i hope it was useful and that this has whetted your appetite for learning more about devops gitlab and aws if you enjoyed this content you can support me in creating more courses like this one by going to my youtube channel and subscribing link in the video description thank you very very much but there's also so much more to learn if you found it hard to work with cli commands i do recommend learning about unix utility commands and bash there are also other gitlab features worth exploring if you like working with and deploying docker containers you may also want to learn about kubernetes for all the topics mentioned above and anything else i forgot to mention you will find links in the course notes if you enjoy my teaching style and you want to take a more advanced gitlab course go to vdespa.com and check out the courses that i'm offering if you are unsure which course is right for you just send me a message on social media i'm more than happy to help i hope you enjoyed spending time with me and i will see you next time
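As a hedged sketch of that cleanup, assuming you created an Elastic Beanstalk application and an S3 bucket during the course, commands along these lines should remove the billable resources. The names below are placeholders, so check the AWS console for what you actually created.

# delete the elastic beanstalk application and force-terminate its environments (placeholder name)
aws elasticbeanstalk delete-application --application-name "my-gitlab-course-app" --terminate-env-by-force
# remove the s3 bucket used for the deployment packages, including its contents (placeholder name)
aws s3 rb "s3://my-gitlab-course-bucket" --force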
Info
Channel: freeCodeCamp.org
Views: 534,170
Id: PGyhBwLyK2U
Length: 296min 37sec (17797 seconds)
Published: Tue Mar 01 2022