How to Deploy an ML Model to AWS SageMaker with MLflow and Docker - Step by Step

Captions
Hello! Today we will deploy a machine learning model on AWS SageMaker. First we will integrate our model with the MLflow user interface, then create two Docker images: the first one will live in Docker Desktop, while the second one is pushed to AWS ECR. From there we will deploy our model to AWS SageMaker, where all model artifacts will be saved to S3. As a result, you will be able to send prediction requests to your model from a local terminal, just like a regular user. So let's get started.

First we need to prepare a Python virtual environment. Before creating it, a few notes about my setup for this tutorial: I am using Python 3.6, I will install MLflow version 1.18.0, I have Docker installed on my local machine, and overall I am working on macOS.

We can create the Python virtual environment using conda. I type: conda create --name deploy_ml python=3.6 (deploy_ml is the environment name and 3.6 is the Python version), press Enter, agree with the conditions, and wait for the environment to be created. Once it is created successfully, I activate it by typing: conda activate deploy_ml. I activate it in the first terminal and then in the second terminal as well, because we will use two terminals in this tutorial: terminal number one and terminal number two. If you are with me so far, let's move forward.

Now we are ready to install the dependencies into our virtual environment. Let's start with MLflow: pip install -q mlflow==1.18.0 (the -q flag means quiet). After a few seconds it installs with no errors, and we can check that it installed correctly with pip list, where MLflow shows up. Next we install the pandas library, which train.py uses to open the data; the exact version doesn't matter: pip install pandas. Then I install the scikit-learn library, which we use both to train the model and to make predictions: pip install scikit-learn. The next dependency is the AWS command line tool, which is very important for this tutorial because we will interact with AWS from the terminal: pip install awscli --upgrade --user. Next I install the boto3 library, which also helps interact with AWS and authenticates your user between the terminal and the AWS console: pip install boto3. You can review all your installed libraries with pip list; everything looks right on my side, so we are ready to move forward.
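If you want to double-check the installs from inside Python rather than with pip list, here is a minimal sketch; only the MLflow version is pinned in this tutorial:

```python
# Quick sanity check that the tutorial's dependencies are importable.
import mlflow
import pandas
import sklearn
import boto3

print("mlflow :", mlflow.__version__)   # expected: 1.18.0
print("pandas :", pandas.__version__)   # version not pinned in the tutorial
print("sklearn:", sklearn.__version__)
print("boto3  :", boto3.__version__)
```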
It's a good time to create a new AWS user in this step. For this, go to IAM in the AWS Management Console. In this window select the Users option, and in the Users window create a new one with Add user. For the username I will use my own name as a test; set whatever username you want, it doesn't matter. But don't forget to choose the access type "Programmatic access": it enables an access key ID and secret access key for the AWS API, command line, and SDK, which is a very flexible option and exactly what you will need in this tutorial.

Go to Permissions. In the permissions step we need to create a new group of policies, so choose Create group and set a name, for example my_user_policies. In the policy list I need to select two policies. The first one is AmazonSageMakerFullAccess, so I select it. The next policy my user needs is AmazonEC2ContainerRegistryFullAccess, so I select that one in my list as well. With these two policies in my group, I create it. As you can see, the group my_user_policies is created with the two attached policies: one for ECR and one for SageMaker. Everything is okay so far. Tags are optional, so it is not necessary to fill them in; I skip them and go to Review. As you can see, the username is set, the access type is "Programmatic access with an access key", there is no permissions boundary, and the my_user_policies group of policies is attached to my user. So I create the user with these details.

This is a very important step: the success screen confirms your user was created and shows its credentials. It is very important to copy your access key ID somewhere in your notes, and the same goes for the secret access key: you can see it only this one time, so I suggest you copy it into your notes too. You will use both in the next step of this tutorial, and it is very important to keep them safe. Once I have copied and pasted them into my notes, I feel comfortable closing this user-creation guide. So far so good.
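If you prefer scripting this step over clicking through the console, here is a rough boto3 equivalent of what we just did; the group and user names are only this tutorial's examples, and the two policy ARNs are AWS's managed policies:

```python
# Sketch: create the tutorial's IAM user programmatically with boto3.
# Assumes your current credentials already have IAM permissions.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="my_user_policies")
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
):
    iam.attach_group_policy(GroupName="my_user_policies", PolicyArn=policy_arn)

iam.create_user(UserName="deploy-ml-user")   # pick any username you like
iam.add_user_to_group(GroupName="my_user_policies", UserName="deploy-ml-user")

# Programmatic access: the secret is returned only once, so store it safely.
key = iam.create_access_key(UserName="deploy-ml-user")["AccessKey"]
print("AccessKeyId    :", key["AccessKeyId"])
print("SecretAccessKey:", key["SecretAccessKey"])
```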
Now we need to set up the AWS command line configuration. I hope you are still enjoying this video; let's configure the CLI for our new IAM user. Go to the terminal and type: aws configure. As you can see, you need to enter your credentials in this step. The first one is the AWS Access Key ID; remember, that is what we copied from the user-creation screen, so paste it here. Then the AWS Secret Access Key; remember I told you to keep it safe, so copy and paste it into the terminal. Then you need to enter a default region. Mine is us-east-2, but you should check what yours is: go to the main page of the AWS Management Console, open the region selector, and the highlighted entry is yours. When typing the region in the terminal, don't type the descriptive first part of the name, only the identifier; in my case that is us-east-2. Finally, for the default output format I think the best option is JSON, so type json. Now all the credentials for my user are entered, so do the same on your side and let's move forward.

Before doing all the following steps, we must be sure that our freshly installed MLflow service is working fine on the local machine. To check it, type the following command in the terminal; for this I am not using the first terminal from before but the second, lower one. The command is: mlflow ui (ui stands for user interface). As you can see, the terminal shows some details about the MLflow service; pay attention to "Listening at: http://127.0.0.1:5000", which means it is serving on port 5000. Enter this address in a browser and you will see the MLflow user interface, with the Experiments and Models sections. At this step we are sure everything is working fine, which is what we expected, so we are ready to move forward. As you may have noticed, when we run the mlflow ui command a new folder named mlruns is created in our project directory.

In this step we are going to adapt our Python script for MLflow. As you can see right here, train.py is a very, very simple Python script which trains a machine learning model for a classification problem; it uses the very simple Iris dataset to keep this tutorial easy. We can quickly test how it works from the terminal: run python train.py, and we can see that the mean squared error is about 0.067, so technically it works very well. This assures us that the dependencies in our virtual environment are good, and we are ready to make some corrections in the code to make it readable for MLflow.
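The full train.py is not shown on screen, so here is a sketch of what such a baseline script could look like, assuming the Iris dataset, a scikit-learn random forest with 100 trees, and the MSE check from the video:

```python
# Sketch of a baseline train.py: Iris data, random forest, MSE report.
# The exact model and split are assumptions; the video only shows the output.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

iris = load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

num_estimators = 100
rf = RandomForestRegressor(n_estimators=num_estimators)
rf.fit(X_train, y_train)

mse = mean_squared_error(y_test, rf.predict(X_test))
print(f"MSE: {mse:.3f}")   # the video reports roughly 0.067
```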
Okay, let's clear the screen and open the script again; I am using Sublime Text. It is a very simple Python script and we need to make some changes in here. First of all, import mlflow. And since, as you can see, this example uses the scikit-learn library to train the model, we should also import mlflow.sklearn, special functionality delivered by MLflow for scikit-learn models and metrics. What's more, we need to create a new experiment for training our model: call the special function mlflow.set_experiment, and let's name our experiment "my_classification_model".

Then we need to make some changes after the training dataset is split. With the line "with mlflow.start_run(run_name='my_model_experiment') as run:" we define the point where a new run of the experiment starts: we initialize the run, give it the run name my_model_experiment, and bind it to the variable run in our code. Then we indent this part of the code under the with-block and make a few more changes.

The first change is mlflow.log_param. log_param identifies which parameters of our model we want to track in the MLflow user interface; here it is num_estimators, the parameter of our current machine learning model that we want to see in MLflow. We log it under the name "num_estimators" and pass the variable to track. The next thing we need to do, after the prediction, is to log the model itself: mlflow.sklearn.log_model, where the model is our random forest, which I copy and paste right here, and the name of the logged model is "random-forest-model". Simple as that. What's more, we can track the mean squared error: mlflow.log_metric (not log_param, but log_metric), with "mse" as the name of the metric we want to track and the mse variable delivering the value.

After all of this we need to add a few more lines. The first one is the run ID, which is very important, so keep it in mind: run_id is taken from run.info.run_id, using the run we bound to the variable run. Then the experiment ID: experiment_id is equal to run.info.experiment_id. Then we can finish our experiment with mlflow.end_run(). After this, we can print out some information: the artifact URI, which we will need in the next step, so it is very beneficial to keep it in the terminal or somewhere in your notes; I like to print it with mlflow.get_artifact_uri(). I can also print the run ID from the run_id variable defined above.

If I haven't forgotten anything, I save the file; I think I made the correct changes to the initial model code, so let's test it: python train.py. Okay, I made some syntax errors around the mlflow calls; I fix them, save again, clear the screen, and try again. Now it works fine, which is very cool: from the output we can see the MSE is 0.067, the artifact URI, and the run ID starting with 52aafa205. That's exactly what I expected to see.

What I want to tell you now is: just refresh your MLflow user interface and look at the left side. In the list of experiments you can see your experiment, my_classification_model. If you go deeper you should see a row for the run: the run name my_model_experiment, which came from your Python script, your user, the source code train.py, the model flavor sklearn, the tracked parameter num_estimators with value 100, and the metric mse, the mean squared error, at 0.067. Going deeper still, you can see some very, very valuable information: the artifacts, which include the model itself, the virtual environment parameters, the run ID, the creation time, and a lot of metadata. There is a special conda.yaml file defining your virtual environment, the model.pkl file (the model itself), and a requirements file that pins the main libraries used during training. That's it for this part; if you see the same on your side, very nice, and we can move forward from this point.
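Putting all those changes together, the instrumented train.py could look roughly like this (a sketch; the data loading and model are the same assumptions as in the baseline sketch above):

```python
# Sketch of train.py after the MLflow changes described above.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

mlflow.set_experiment("my_classification_model")

iris = load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="my_model_experiment") as run:
    num_estimators = 100
    mlflow.log_param("num_estimators", num_estimators)   # tracked parameter

    rf = RandomForestRegressor(n_estimators=num_estimators)
    rf.fit(X_train, y_train)

    mse = mean_squared_error(y_test, rf.predict(X_test))
    mlflow.sklearn.log_model(rf, "random-forest-model")  # log the model itself
    mlflow.log_metric("mse", mse)                        # tracked metric

    run_id = run.info.run_id
    experiment_id = run.info.experiment_id
    artifact_uri = mlflow.get_artifact_uri()             # needed in later steps

mlflow.end_run()  # explicit, as in the video; the with-block ends the run anyway
print("MSE:", mse)
print("artifact URI:", artifact_uri)
print("run ID:", run_id)
```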
Now we come to a very important part of this video tutorial. We have a working Python script integrated with MLflow to track machine learning model parameters and to log the model, and we are ready to build a Docker image and push it both to the Docker Desktop application and to AWS ECR, the Elastic Container Registry. First we need to set some credentials that allow our terminal to communicate with AWS from the local machine, using the export command. The first variable is the access key ID: export AWS_ACCESS_KEY_ID=<your access key ID>, equal to what we copied from the user account; this is mine. The second variable we need to create is the secret: export AWS_SECRET_ACCESS_KEY=<your secret access key>. Remember when I told you to save your secret access key in your notes or somewhere safe? It is not a password, but it is like a token you need for a secure connection to AWS from your terminal; we copied it when our user was created at the beginning of this video. Now that these environment variables are created, we are finally ready to build the local container image and push it.

Let's come back to our MLflow user interface. Under Experiments you can see your experiment; mine is my_classification_model, so I click it. You should see one or more rows generated from your runs and experiments; select the row that contains a model, in my case the sklearn one. If you scroll down you should see an Artifacts section; expand it and you will find the artifacts: the machine learning model information, the conda environment details, the model itself, and the requirements for your environment. From here you can also see the full path to your artifacts directory. In your file explorer this is the mlruns folder: we now have experiment number 1, then the run folder named 52aafa205 and so on (remember, this full string is your run ID), then the artifacts folder that we will need, and then random-forest-model.

Let's activate our terminal and clear it again. From the project directory, where the mlruns folder is, go into mlruns, then select folder 1, then the run ID folder (remember, 52aafa2... in my case), then artifacts, and finally random-forest-model. The files inside this folder are the same files you see listed in the MLflow user interface, just from a different point of view; from these files we will build the Docker image. Now I am sure we are prepared to build and push the container to Docker Desktop and to AWS ECR, so let's do it. For this I use MLflow's SageMaker functionality with a very, very powerful command: mlflow sagemaker build-and-push-container. Press Enter and wait a little bit. I see the build is on its way and the login succeeds. It can take approximately five minutes, depending on your connectivity to AWS, so just wait a little and we will come back. And finally, all image layers are pushed successfully to AWS ECR and to the Docker Desktop application.
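As a side note, you can double-check from Python that the artifacts folder you just navigated really contains a loadable model; this is a sketch with a placeholder run ID, not a step from the video:

```python
# Sketch: verify the logged artifacts contain a loadable model.
# Replace the run ID below with your own (printed by train.py).
import mlflow.sklearn

model_uri = "mlruns/1/52aafa205<rest-of-run-id>/artifacts/random-forest-model"
model = mlflow.sklearn.load_model(model_uri)

# Four feature values, matching the Iris columns assumed earlier.
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))
```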
So now we are ready to check the status. Let's open the Docker Desktop application; in here we should see two lines for our images. The first one, as you can see, is tagged for the AWS ECR repository where our image is located, and the second one, mlflow-pyfunc (pyfunc stands for Python function), is our local image in the desktop application. Now let's check what is going on in AWS ECR, the Elastic Container Registry: you can click through to it from here, or find it via the service search. You should see the mlflow-pyfunc image that was created from your terminal; it is the same image as the one in your local environment, remotely connected with AWS ECR. If you open it, you should see the image tag: it is 1.18.0, the MLflow version we installed in our Python virtual environment. Inside are the main details about the image that will be deployed to AWS SageMaker, which is what we are going to do in the next steps. So I can close the Docker application now, and see you at the next step.

Okay, now we are almost at the finish line of our tutorial: we are ready to deploy our image from AWS ECR to AWS SageMaker. I clear the screen of this terminal and come back to my project directory, which now holds the mlruns folder and train.py; I go back to the same location in my file explorer. In here we should create a separate file responsible for deploying our image to AWS SageMaker, so we use the touch command in the terminal to create deploy.py, the Python script that will deploy our image from ECR to SageMaker. Now we need to write some Python code to perform this action, so let's open this file with your Python code editor; I am using Sublime Text.

Let's write our deployment code in this window. First of all we need to import MLflow's SageMaker module: import mlflow.sagemaker as mfs. Then we will need, first of all, the experiment ID; remember, it is "1", as a string. Then we need the run ID; remember, the run ID is the name of that folder inside mlruns, so I paste it here, or you can find your run ID in the MLflow UI and copy it into your Python code. We also need the region; remember, ours is us-east-2, which came from AWS itself. Then we will need the AWS ID; leave it for later. Then we will need an ARN, which is, let's say, the handle for a remote connection to a SageMaker IAM role; we will create this role a little later for temporary usage, so I will leave it for later as well. Then we need the application name, let's say model-application. Then the model URI: I use an f-string that joins mlruns, the experiment ID, the run ID, then artifacts, and then random-forest-model; I think this is enough. Then the tag ID: remember, the tag ID is the image tag from ECR, and since we are using MLflow version 1.18.0, that is our tag ID, so copy and paste it here as a string.
What's more, we can now define the image URL: it is the AWS ID, plus ".dkr.ecr." (the domain part for an ECR container image), plus the region, plus ".amazonaws.com/", then "mlflow-pyfunc", because that is the name of our image (pyfunc stands for Python function), and at the very end, after a colon, the tag ID. This is the format for composing the image URL; it is very important, so pay attention and be careful writing this line. Then we use mlflow.sagemaker: with mfs we implement the deployment, mfs.deploy. And what are we deploying? First of all the application name, then model_uri equal to the model URI we defined just before, then region_name equal to the region, then the mode, and the mode is "create", to create an application. Next is the execution role: execution_role_arn comes from the ARN, which is what we will do a little later; in a few minutes we will create a new role with SageMaker full access, for temporary usage, dedicated to the deployment purpose. Finally, image_url equal to the image URL. That's it for this code, but we still have some placeholders to fill in: the AWS ID and the ARN for the SageMaker full-access IAM role, so let's define them right now.

I think we can start with the simpler one, the AWS ID. You can get your AWS ID in the terminal with a very simple command; I clear the screen and type: aws sts get-caller-identity --query Account --output text. And here it is, our AWS ID; copy it from here and paste it into the script as a string. That looks as expected, so far so good.

Now we should create the ARN role for SageMaker full access, for temporary usage. Go to the AWS console, type IAM in the search field, go to "Manage access to AWS resources", and select Roles. We need to create a new one, so click Create role, choose SageMaker as the service, and confirm the policy: it is AmazonSageMakerFullAccess, the correct one and exactly what we need. I skip tags because they are optional. For the role name, set something like aws-sagemaker-for-deploy-ml-model, and create the role. That's it: here it is in the list of roles. Open it, and this line is what we need, the Role ARN. Simply copy it and paste it into our Python code where the ARN placeholder is, and save.

One last thing we have to do before deploying the container image from ECR to AWS SageMaker is to create a permission for our user to access S3 buckets. Go to Users, then select your user (this is mine), and add permissions: here I can add an inline policy. From the service list I look for S3, select All actions, and for resources I select all resources, to keep it simple. Then review the policy, give it a name, let's say s3_full_access_for_deploy, and create it. Here it is in the list of my user's policies, and this is what we need to deploy a model from ECR to AWS SageMaker and to save the model artifacts in an S3 bucket.
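Assembled from the steps above, deploy.py could look roughly like this. It is a sketch: the run ID and role ARN are placeholders to replace with your own values, and here the AWS account ID is fetched with boto3 instead of pasted in from the aws sts command, which is equivalent:

```python
# Sketch of deploy.py: deploy the ECR image and model to a SageMaker endpoint.
import boto3
import mlflow.sagemaker as mfs

experiment_id = "1"
run_id = "52aafa205<rest-of-run-id>"       # placeholder: your run ID
region = "us-east-2"                       # use your own region
aws_id = boto3.client("sts").get_caller_identity()["Account"]
arn = "arn:aws:iam::<aws-id>:role/<role>"  # placeholder: your SageMaker role ARN

app_name = "model-application"
model_uri = f"mlruns/{experiment_id}/{run_id}/artifacts/random-forest-model"
tag_id = "1.18.0"                          # image tag = installed MLflow version

# <aws_id>.dkr.ecr.<region>.amazonaws.com/mlflow-pyfunc:<tag>
image_url = aws_id + ".dkr.ecr." + region + ".amazonaws.com/mlflow-pyfunc:" + tag_id

mfs.deploy(
    app_name=app_name,
    model_uri=model_uri,
    region_name=region,
    mode="create",
    execution_role_arn=arn,
    image_url=image_url,
)
```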
I think that's all correct, and now we need to run this deploy.py file. But first I check one thing: going back to my deploy script, I notice I forgot the colon before the image tag at the end of the image URL, so I add it; now it should work fine. Let's run it: python deploy.py, wait, and hope it will be successful. And I see it finally finishes successfully; the whole deployment took about 10 minutes, so don't be afraid of the long progress. We can go directly into AWS and check what is going on. First of all I go to SageMaker, then under Inference I go to Models, and you should see the deployed model, the one belonging to model-application. If you go inside, we can see where exactly the model is located: as promised at the very beginning of this tutorial, the artifacts are saved in an S3 bucket, and this is the file containing those artifacts. Then go to Endpoint configurations: here is some more data, including another S3 location where collected data will be stored; if your model collects any kind of data, it goes to S3, which is a very good solution for this. Then the endpoint: the endpoint name is model-application, the one we defined in our deploy code, and it works like this. If you go inside, you can see the type is real-time and the status is InService, which is perfect for us. That's everything we need to check after the deployment, so now we are ready to go to the next step.

Okay, now we are at the finish line of this video tutorial. I clear the terminal window and come back to the main working directory. What I need to do right now is create one more file, predict.py, with which I will make a new prediction using the model deployed on SageMaker. Here is the file, created in seconds, and I need to write some additional code in it that will connect to AWS SageMaker and make a new prediction with the data I ingest into the model. Don't worry about this script: you can find it in the repo I will link in the description of this video tutorial. I just need to add one bracket right here, and with this script I can make a prediction by ingesting new data into the deployed model on SageMaker. I wrote the script quickly and in a straightforward way; you can write it differently, but the main idea is that it communicates with the AWS SageMaker model via boto3, using the region and the application name, ingests new data into the model, and returns a prediction. To make it clearer, the script prints the query, so you can see exactly which data we are ingesting.
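The exact predict.py is in the linked repo; as a stand-in, here is a hedged sketch of such a script. It assumes the pandas-split JSON format accepted by the MLflow 1.x pyfunc scoring container, and the row index is the value I change between the test runs below:

```python
# Sketch of a predict.py: query the SageMaker endpoint with one row of data.
import json

import boto3
import pandas as pd
from sklearn.datasets import load_iris

app_name = "model-application"
region = "us-east-2"

# Check that the endpoint is up before querying it.
sm = boto3.client("sagemaker", region_name=region)
status = sm.describe_endpoint(EndpointName=app_name)["EndpointStatus"]
print("Application status:", status)   # expect "InService"

# Build a one-row query from the Iris data; change the row index to test.
iris = load_iris()
frame = pd.DataFrame(iris.data, columns=iris.feature_names)
query = frame.iloc[[15]]               # row 15, as in the last test
print("Query:", query.values.tolist())

runtime = boto3.client("sagemaker-runtime", region_name=region)
response = runtime.invoke_endpoint(
    EndpointName=app_name,
    Body=query.to_json(orient="split"),
    ContentType="application/json; format=pandas-split",
)
print("Received response:", json.loads(response["Body"].read().decode("utf-8")))
```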
Now we can make a prediction: run python predict.py. I see that the application status is InService, we ingest this data, and the received response is 0. What if we change the data we want to ingest? It was row 12; I change it to row 3, so you can see it is different data, and the response is 1.99. And for the last test, let's say I want to ingest row 15: save it and run the new test. It is new data, and the new response is 1.0. I think you got the main idea of how it works.

Congratulations, we finished this tutorial! It was not an easy one. The next step is a bonus to help you dive deeper into this topic: I suggest checking the repo I made especially for you. It covers all the steps I explained in this roughly one-hour tutorial, everything in one place with all the details and nothing missed, so you can check it on your side and apply it to any application you want. If you find that something is missing or that some step needs clarification, just let me know and I will add it. That's all from my side today. See you in the next video, never stop learning, and thank you for watching.
Info
Channel: Data Science Garage
Views: 13,036
Keywords: deploy ml model, aws sagemaker mlflow, mlflow sagemaker, build-and-push-container, deploy model sagemaker, aws sagemaker python, deploy ml model on aws, mlflow docker container, aws ecr, elastic container registry, mlflow sagemaker deploy, aws iam roles, mlflow tutorial python, aws docker tutorial
Id: FsoSBsrcx9Q
Length: 53min 43sec (3223 seconds)
Published: Tue Sep 14 2021