End-to-End Machine Learning Project Implementation Using AWS SageMaker

Video Statistics and Information

Captions
Hello all, my name is Krish Naik and welcome to my YouTube channel. In this video we are going to create an end-to-end machine learning project with the help of Amazon SageMaker. If you don't know SageMaker, it is very popular in industries that use the AWS cloud: with SageMaker you can build, train and deploy your machine learning models, create endpoints, and expose those endpoints so that any application can use them. In this video I'm not going to use the AWS console; instead I'll show you everything through code, including which libraries are required, step by step. Please watch till the end, because I'm planning this in two parts: in the first part I'll show everything with the help of a Jupyter notebook, and in the second part I'll build a full end-to-end project that follows the entire lifecycle. So, without wasting any time, let's get into it. Here is my entire code. I have taken a dataset called the mobile price classification dataset, and this is a classification problem statement: we need to predict the price range of a specific mobile based on all the features we have. The first thing you need to take care of is enabling the AWS CLI. Download the AWS CLI (the command line interface) so that you can run AWS commands from the command prompt itself. To install it on Windows (or whichever machine you have), search for "AWS CLI", click the first link, and for Windows you will see an MSI file — just like an executable — so download it, double-click it and install it. With that, the AWS CLI is installed and enabled on your system. That is the first step. Next we need to configure it for a specific user. If you don't know how, search for IAM; there you can create your own users. You can see many users already exist here; I'll open one to show how it is created — this is the sagemaker user, and I will be giving it administrator access. To create a user, click "Add user", provide a username (I'll write something like "sagemaker-example" just to show you), optionally give the user access to the AWS Management Console (it's also fine to leave that unchecked), click Next, then choose "Attach policies directly" and attach AdministratorAccess.
AdministratorAccess gives you access to most of the things you'd need; it isn't strictly required — you could also attach only the SageMaker service policies — but here I'm just going to give administrator access and click Next. You'll also see the permission boundaries option here, and then you can go ahead and create the user. Before that, I also want to create my own access key (secret key) for this exercise, so I'll click "Create user". Once the user is created you should see it in the list — let me reload — yes, "sagemaker-example" is my username. Now I go to "Security credentials" and create my access keys. This access key is super important and you have to create it the first time: click "Create access key", select "Command Line Interface (CLI)", click Next, confirm, skip the description tag, and here is my access key. Download the CSV file — this matters, because the access key and the secret access key it contains are what you copy and paste into the CLI configuration. If I open it you can see what the access key and secret access key look like; don't try to use these, because I'm going to delete them after this video is done. Now I open my Anaconda prompt and run aws configure: I paste the access key ID (the first value I copied), then the secret access key, accept us-east-1 as the default region, and leave the default output format empty. Let me note the steps in a notepad. Step 1: create an IAM user and give it administrative access. Step 2: enable the AWS CLI — because all the coding will happen here in VS Code and it should be able to create and run instances in SageMaker from the command line interface — and configure it with whatever access key and secret access key you have.
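For reference, the configuration prompt looks roughly like this — the key values are placeholders for the values from your downloaded CSV, not real credentials:

    aws configure
    AWS Access Key ID [None]: <access key ID from the CSV>
    AWS Secret Access Key [None]: <secret access key from the CSV>
    Default region name [None]: us-east-1
    Default output format [None]: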
With that done, I open VS Code. The entire code is here and I'll go through it step by step, but first let's create a new environment. I open a new terminal in VS Code, switch it to a command prompt, run conda deactivate (it was pointing at some other development environment), and create the new environment with conda create -p myenv python=3.8 — you can also use 3.9, it's up to you. Press "y" and the environment gets created as myenv inside this same folder, so every installation I do goes into it. Later on I'll convert this into an end-to-end project — the data ingestion layer and all of that will come in part two; this first part focuses on understanding things. Once the environment is created I run conda activate ./myenv so that I'm inside the myenv environment, and then I create a requirements.txt for whatever I want to install: sagemaker, scikit-learn, pandas, numpy and one more library, ipykernel. The reason I add ipykernel is that I'll need it as the kernel for my Jupyter notebook; it's not strictly necessary. Then pip install -r requirements.txt (note the space after -r) kicks off the installation, which will take some time depending on your internet speed. To recap the libraries: sagemaker, the SDK we'll use now that the AWS CLI is configured; scikit-learn for the machine learning algorithms; pandas and numpy; and ipykernel for the Jupyter kernel. Those are the basic libraries we require, and a summary of the setup is below.
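A minimal sketch of that environment setup, assuming the same names used in the video (myenv, requirements.txt):

    # requirements.txt
    sagemaker
    scikit-learn
    pandas
    numpy
    ipykernel

    # commands run in the VS Code terminal
    conda create -p myenv python=3.8 -y
    conda activate ./myenv
    pip install -r requirements.txt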
While the install is running, let's plan the next steps: where you have to create a bucket in S3 and what naming convention to use — bucket names have to be unique across all of AWS worldwide, so choose carefully. If you don't know about S3: whenever you want storage space for a longer period of time, you create this kind of bucket. So I'll search for S3, create a new bucket and name it something like "mobbucketsagemaker" — that's the naming convention I'm going with — in us-east-1; nothing else needs to be selected, just create the bucket. (There are already a lot of buckets here that we've used for other work.) Back in the terminal, the pip install is done, so I'll clear the screen and close it. Now the first thing in the notebook is setting up the kernel — this is the Python kernel from the environment we just created. One thing to focus on: here we have to give the bucket name, and it must be the unique bucket we created, so I'm providing "mobbucketsagemaker". To begin with we import sagemaker, train_test_split (I hope everybody knows train_test_split), boto3 — the library used specifically to talk to S3 — and pandas. Let's see whether everything works; it could error if boto3 weren't installed, but it runs, and you can see "Using bucket mobbucketsagemaker" ("mob" stands for the mobile price classification). Step by step: we create a boto3 client that can communicate with SageMaker and S3, we create a SageMaker Session, and we read session.boto_session.region_name, which picks up whatever the default region is — us-east-1 in my case. The bucket variable holds the reference to my bucket name, so anything I store — say, the train/test split data — goes into that bucket. The next step, which everybody knows, is reading the dataset. This is how it looks: battery_power, blue, clock_speed, dual_sim, 4G, internal memory and so on, and the last column — the output feature — is price_range, which makes this a classification problem.
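A condensed sketch of that setup cell, under the assumption that the CSV is available locally under this hypothetical file name:

    import sagemaker
    import boto3
    import pandas as pd
    from sklearn.model_selection import train_test_split

    bucket = "mobbucketsagemaker"            # the globally unique bucket created above
    sm_boto3 = boto3.client("sagemaker")     # low-level SageMaker client
    sess = sagemaker.Session()               # high-level SageMaker session
    region = sess.boto_session.region_name   # e.g. us-east-1
    print("Using bucket " + bucket)

    df = pd.read_csv("mob_price_classification_train.csv")  # hypothetical file name
    df.head()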
Then df.shape, and value_counts(normalize=True) just to see the proportion of each class. I'm not focusing much on feature engineering or feature selection here — you can definitely do that yourself — because the two things I want to show are connecting to the S3 bucket and training the whole model inside SageMaker, including creating the instances. df.columns lists all the features, df.shape again, and df.isnull().mean() gives the percentage of missing values per column — it's zero, so no need to worry. All the column names are collected into a list, and pop(-1) removes just the last one, price_range, which is how I split the data into independent and dependent features: X.head() shows the independent features, and y holds the dependent feature, so y.head() shows the price-range categories — roughly 0 for low up to 3 for high; it's just a price range, nothing to worry about. X.shape tells you there are 2000 records and 20 features, and y.value_counts() shows how many records fall into each category (0, 1, 2, 3) — 500 each, equally proportioned, so this is not an imbalanced dataset and we don't have to worry about that. The next step you already know: the train/test split. I've explained it so many times; I'm just taking X and y and splitting them, after which the shapes show roughly 1700 records in the training set and 300 in the test set. I then convert them back into DataFrames with pd.DataFrame using the same column labels and add the y values back as the price_range column, so the training frame ends up with 21 columns, and the same is done for the test frame. Then trainX.head() — all very simple — and isnull().sum(), again simple. Any feature engineering you want to do — handling missing values and so on — you can do up to this point. Now I save these frames as train-V-1.csv and test-V-1.csv with index=False. Why save them? Because I want to push both files into the same S3 bucket I created. Next I define a prefix — the folder path inside the bucket: the bucket itself is mobbucketsagemaker, and inside it I'm creating the prefix sagemaker/mobile_price_classification/sklearncontainer. Then — and this is the important part — I call session.upload_data. Which session? The one we made at the start: the SageMaker Session created alongside the boto3 client. That session object provides the upload functionality: path is the local file, train-V-1.csv, bucket is the same bucket name I gave in the initial part, and key_prefix is the folder path, so inside the bucket it creates sagemaker/mobile_price_classification/sklearncontainer and puts train-V-1.csv there; the returned training path is stored, and the test file is uploaded the same way with only the file name changing. A sketch of this split-and-upload step follows.
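Continuing from the session set up above, this is roughly what the split-and-upload cells look like; the test_size of 0.15 is an assumption consistent with the roughly 1700/300 split mentioned in the video:

    features = list(df.columns)
    label = features.pop(-1)               # last column, price_range
    X = df[features]
    y = df[label]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=0)

    trainX = pd.DataFrame(X_train, columns=features)
    trainX[label] = y_train
    testX = pd.DataFrame(X_test, columns=features)
    testX[label] = y_test

    trainX.to_csv("train-V-1.csv", index=False)
    testX.to_csv("test-V-1.csv", index=False)

    # upload both files into a folder (prefix) inside the bucket
    sk_prefix = "sagemaker/mobile_price_classification/sklearncontainer"
    trainpath = sess.upload_data(path="train-V-1.csv", bucket=bucket, key_prefix=sk_prefix)
    testpath = sess.upload_data(path="test-V-1.csv", bucket=bucket, key_prefix=sk_prefix)
    print(trainpath, testpath)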
You can print trainpath and testpath to see them. Let's execute it — you can see the upload happening, and the full path becomes s3://mobbucketsagemaker/sagemaker/mobile_price_classification/sklearncontainer/train-V-1.csv. Now let's verify the files are actually there. Let me find my bucket — there are so many buckets — here it is, and the objects load: inside sagemaker you have mobile_price_classification, then sklearncontainer, and there are test-V-1.csv and train-V-1.csv, uploaded just now (June 23, same timestamp). Perfect — everything up to here has gone well, and those are the train path and test path. What we have done so far is essentially the data ingestion phase: take the data, do the train/test split, and upload it to the S3 bucket. Later, when I convert this into an end-to-end project, this block of code will move into the data ingestion component, and the boto3 session handling and the upload will go into a utility module for that project; I've already built that kind of end-to-end project, so don't worry, that code is written too. Now comes the super important part: the %%writefile script.py cell. We are going to create a script.py file — it's also available in the SageMaker documentation; I haven't written it from scratch, I've taken it from the docs and changed the pieces I needed, such as using RandomForestClassifier. Whenever you start a cell with %%writefile script.py, it creates a file named script.py containing the entire code of that cell. So what is this code about? At the top we define a function called model_fn, which calls joblib.load on the model file inside the given model directory — that model directory matters because the trained model will also be saved to the bucket; you'll see how everything connects.
So model_fn's job is to load the saved model. Execution actually starts from the if __name__ == "__main__" block. Whenever you train a model in SageMaker, the script needs to accept some default arguments — some of them are required by AWS SageMaker itself — and we collect them with argparse.ArgumentParser. Two of the arguments are hyperparameters of RandomForestClassifier — n_estimators and random_state — and those I've added from my side. By default, though, the AWS SageMaker side also needs us to accept the model directory, the train and test channel directories, and the train and test file names (we already know what those file names are). Their defaults come from environment variables — SM_MODEL_DIR, SM_CHANNEL_TRAIN and SM_CHANNEL_TEST — which SageMaker sets up in the training environment itself, so there is nothing to change there; again, this whole script.py is essentially taken from the documentation. It also prints the scikit-learn and joblib versions. Then we read the train file from the path os.path.join(args.train, args.train_file) — the channel directory plus the file name — and similarly the test file. After that it's the same steps as before: build the features and labels, apply the RandomForestClassifier, fit it, dump the trained model with joblib once training is done, predict on the test set, compute the test accuracy, and print the classification report. That's the whole script.py: it has the __main__ guard, so it simply executes line by line to the end. I execute the cell and you can see script.py is created; this file is only going to be used by SageMaker, and you'll see how as we go.
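For orientation, here is a condensed sketch of what that script.py notebook cell contains — adapted from the SageMaker scikit-learn example, so the model file name and default values are assumptions rather than the exact code from the video:

    %%writefile script.py
    import argparse, os, joblib
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, classification_report

    def model_fn(model_dir):
        # SageMaker calls this at inference time to load the trained model
        return joblib.load(os.path.join(model_dir, "model.joblib"))

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        # my own hyperparameters for the random forest
        parser.add_argument("--n_estimators", type=int, default=100)
        parser.add_argument("--random_state", type=int, default=0)
        # directories and file names that SageMaker passes in via environment variables
        parser.add_argument("--model-dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
        parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
        parser.add_argument("--test", type=str, default=os.environ.get("SM_CHANNEL_TEST"))
        parser.add_argument("--train-file", type=str, default="train-V-1.csv")
        parser.add_argument("--test-file", type=str, default="test-V-1.csv")
        args, _ = parser.parse_known_args()

        train_df = pd.read_csv(os.path.join(args.train, args.train_file))
        test_df = pd.read_csv(os.path.join(args.test, args.test_file))
        features = list(train_df.columns)
        label = features.pop(-1)                  # price_range

        model = RandomForestClassifier(n_estimators=args.n_estimators,
                                       random_state=args.random_state)
        model.fit(train_df[features], train_df[label])
        joblib.dump(model, os.path.join(args.model_dir, "model.joblib"))

        y_pred = model.predict(test_df[features])
        print("Test accuracy:", accuracy_score(test_df[label], y_pred))
        print(classification_report(test_df[label], y_pred))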
Now, SageMaker gives you an estimator class called SKLearn (with a capital SK). It takes a framework version, and the first thing you give it is script.py — whatever .py file you've written — as the entry point for the training, plus the role we're going to hand it. Where does that role come from? Search for IAM and go to Users — these are the users we created earlier, and this role is the one change you do have to make yourself. Open the SageMaker example user and you can see its ARN; that ARN is what you need to specify as the role in the code — just copy it from wherever you're taking it and paste it in, which is what I've done here. Then instance_count is how many instances you want to spin up. Look at what this SKLearn estimator is doing: it will create an instance in SageMaker — it assigns a machine — and the instance type here is ml.m5.large. You can also do all of this manually from the SageMaker console (my SageMaker playlist shows how), but here I'm specifying it from code: instance type ml.m5.large, framework_version as given in the documentation, base_job_name, which I'm providing as the name prefix used for the job and its output folder once the model is trained, hyperparameters n_estimators=100 and random_state=0, use_spot_instances=True, and max_wait and max_run set to some default values. This is what is going to run in SageMaker, and it knows where the script.py file is. I'll execute this cell — done.
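Roughly, the estimator cell and the fit call look like the sketch below; the role ARN is a placeholder, and the framework version and spot-timeout values are assumptions, so substitute your own:

    from sagemaker.sklearn.estimator import SKLearn

    FRAMEWORK_VERSION = "0.23-1"           # assumption: scikit-learn version from the docs
    sklearn_estimator = SKLearn(
        entry_point="script.py",
        role="arn:aws:iam::<account-id>:role/<your-sagemaker-execution-role>",  # placeholder
        instance_count=1,
        instance_type="ml.m5.large",
        framework_version=FRAMEWORK_VERSION,
        base_job_name="RF-custom-sklearn",
        hyperparameters={"n_estimators": 100, "random_state": 0},
        use_spot_instances=True,
        max_wait=7200,                     # assumption: seconds to wait for spot capacity
        max_run=3600,                      # assumption: maximum training time in seconds
    )

    # launch the training job; with wait=True the call blocks until training finishes
    sklearn_estimator.fit({"train": trainpath, "test": testpath}, wait=True)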
Now all that's left is to call sklearn_estimator.fit on the train path and test path with wait=True (as in the sketch above). This is where we launch the training job — with wait=True the call blocks until the job finishes — and it goes off, creates an instance in AWS SageMaker and starts training. Let's execute it: the fit process starts using the provided S3 resources, and you can see it created a training job with a specific name. The training will take some time, so meanwhile I search for SageMaker in the console, click Training, then Training jobs. There's the job name with today's date (23-06-2023); it has started and is in progress, meaning training has begun — it knows where the S3 bucket is, it knows everything, and it will open an instance of the type we asked for, ml.m5.large, and run the training. The rest of the code is all about what comes next: how to deploy the model, where the model file is created and how it gets moved to the S3 bucket as a model artifact. The whole thing takes around five to six minutes, because we're launching an instance and then training a model on it, so let's wait. And finally the training is done. You can see step by step what happened in the logs: it created a training job with this name — remember, we assigned that name through base_job_name, RF-custom-sklearn — then it prepared the instance for training, downloaded the input data from the S3 bucket, and completed the training image download. You can also see "No GPUs detected"; we could use GPU instances here as well — I'll show that when I do a deep learning project. Then "Invoking user training script", which is our script.py; it prints the whole training environment, all the hyperparameters, and at the end the training seconds and billable seconds, 108. Opening the training job in the console, it shows as completed, with all the training information — the role, billable seconds, everything. One more thing to understand is the output data configuration: whatever model was created is saved to an S3 output location, in SageMaker's default bucket, named something like sagemaker-us-east-1-566373… (it includes your account ID). Let me look for that bucket in S3 — us-east, May 4, 2023... not that one; inside this one there's a file from 23-6.
Yes, this is the one from the 23rd of June. Inside it you have the output folder, and inside that is the entire model artifact. So the training ran on the instance, and the output — the trained model — was dumped into the S3 bucket; all of that was done by this one block of code, the bucket was created automatically, and the artifact is sitting there. You can also use a bit of code to pull more information about the training: sklearn_estimator.latest_training_job.wait() waits on the latest training job, and then we call describe_training_job with that job's name and read ModelArtifacts → S3ModelArtifacts, which simply tells you where your model lives. Printing the artifact gives the full S3 path to the model, so that location is available in code as well — if you're familiar with machine learning workflows this will all feel natural. One more thing that happened during the run: while preparing the instance, the training image download completed, training ran, and the generated model was uploaded, so under the RF-custom-sklearn job you can now see the model output. I hope that part is clear. Next, from sagemaker.sklearn.model we import SKLearnModel and create a model object pointing at another location: I give it a model name, and effectively I'm taking the artifact from the training output folder and registering it as a custom scikit-learn model, because I also need to do the deployment. Let me execute this once — and that is my model.
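A sketch of those two cells — locating the artifact and wrapping it in an SKLearnModel — with the role ARN left as a placeholder and the timestamped name only an assumption:

    from sagemaker.sklearn.model import SKLearnModel
    from time import gmtime, strftime

    sklearn_estimator.latest_training_job.wait(logs="None")
    artifact = sm_boto3.describe_training_job(
        TrainingJobName=sklearn_estimator.latest_training_job.name
    )["ModelArtifacts"]["S3ModelArtifacts"]
    print("Model artifact persisted at", artifact)

    model_name = "Custom-sklearn-model-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
    model = SKLearnModel(
        name=model_name,
        model_data=artifact,              # the model.tar.gz in the S3 output folder
        role="arn:aws:iam::<account-id>:role/<your-sagemaker-execution-role>",  # placeholder
        entry_point="script.py",
        framework_version=FRAMEWORK_VERSION,
    )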
The reason I'm putting it in this separate location is that I want to keep a copy there and deploy that specific model as an endpoint. For deploying the model as an endpoint, this code is more or less fixed — you don't have to worry much about it, because it's also available in the documentation. I take an endpoint name, use the same model we just registered (if you look at model_name, that's the location in the S3 bucket), and call model.deploy, deploying on a specific instance type as an endpoint. Once I execute this, it takes that endpoint name, creates a model with that name, and starts the deployment. This deployment is the important part, because the same predictor.predict call will then let us predict for any new input data. So this code is doing the endpoint deployment, and it will again take some time — roughly the four minutes it took for training; that was to train the model, this is to deploy it. You may be wondering why I used that extra location in the S3 bucket: if you look, a custom-sklearn source-directory tar.gz was created for the same date. It's the same model — I created an additional folder, and the deployment to the endpoint happens from there; all we really need is the model tar archive from the output folder, so we take it from the output folder, put it in that folder, and deploy it as an endpoint — it just keeps an additional copy, that's it, nothing else. If you execute it yourself you'll see. This will take a while, because we need to start an instance for the endpoint and then do the deployment. Once the endpoint is up, for any data you just call predictor.predict with that input data in the form of a list and you get the output back. So we wait three to four minutes, the same way we did for training — again mainly because an instance is created before the deployment happens. And finally the deployment is done: the predictor is now backed by the endpoint, which means that with the help of this predictor we can do predictions.
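For reference, the deployment cell looks roughly like this; the endpoint instance type shown is an assumption, so use whichever type suits you:

    from time import gmtime, strftime

    endpoint_name = "Custom-sklearn-model-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m4.xlarge",    # assumption: pick an endpoint instance type
        endpoint_name=endpoint_name,
    )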
If I print the predictor, you can see it is an SKLearnPredictor — and remember why: whenever you create an endpoint from SageMaker this way, that is the return type you get back from the AWS SageMaker SDK itself. So now your endpoint lives in this predictor variable, and at any point you can take some data and run predictions on it row by row. Here I predict the first two records and get two outputs: the first record's output is 3 and the second record's is 0 — that is the predicted price range for each input row. In short, all of this is happening inside AWS SageMaker. How can I tell? In the console you can see the training jobs, and under Endpoints you can see this same endpoint, with the model name and the invocation URL — it's through that endpoint that I'm sending input and getting output. Step by step, you don't have to do everything from scratch: for a more complex dataset you'd perform your own feature engineering, but the training and the inference part you can definitely run through SageMaker like this. That's the endpoint configuration, and there is your endpoint. At the end of the day, remember one thing: all of this is chargeable, so it's always a good habit to delete the endpoint when you're done, just by providing its name. Once I call delete_endpoint through the SageMaker client, you'll see the endpoint get deleted so you don't keep getting charged.
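Before wrapping up, a compact sketch of those last two calls — prediction against the endpoint and the cleanup — assuming the testX frame and the features list from earlier:

    # predict the price range for the first two rows of the test features
    print(predictor.predict(testX[features][0:2].values.tolist()))   # e.g. [3, 0] in the video

    # endpoints are billed while they run, so delete when finished
    sm_boto3.delete_endpoint(EndpointName=endpoint_name)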
So that gives you a clear idea of how to run this. The only thing that really changes from project to project is the feature engineering; the rest — how to train a model using the SKLearn estimator — stays the same. These were just machine learning models; I'll also show deep learning later. You've seen how to set up the role and how to set up script.py, and inside script.py you can write any code you like — say, hyperparameter tuning — and make it quite generic; beyond that, the main thing you play with is the instance type, and everything else stays the same. In my upcoming videos I'm going to convert this entire notebook into an end-to-end project with proper packages and a source folder — every component from data ingestion to data transformation, data validation, model training and finally model deployment — and that's what I'm targeting in the next video. I hope you liked this video; that was it from my side. I will see you all in the next video. Have a great day, thank you one and all, take care, bye!
Info
Channel: Krish Naik
Views: 94,368
Keywords: yt:cc=on, AWS, SageMaker, machine learning, end-to-end project, implementation, tutorial, data preprocessing, model training, model deployment, AWS services, cloud computing, AI, artificial intelligence, deep learning, neural networks, AWS SageMaker tutorial, AWS machine learning, AWS SageMaker implementation, krish naik machine learning
Id: Le-A72NjaWs
Length: 40min 24sec (2424 seconds)
Published: Sat Jun 24 2023