End-to-End Machine Learning Project Implementation with Docker, GitHub Actions and Deployment

Captions
In this video we are going to start our first practical implementation. We have already covered the theoretical part of linear regression: the in-depth math intuition, gradient descent, and the performance metrics we are going to use, like R-squared and adjusted R-squared. Now let's implement a project with linear regression. This is a very simple project, just to begin with, so that it helps you build up some confidence. We are taking a well-known dataset called the Boston housing dataset, and the main aim is to predict the price of a house based on various features. We will see which features are present and which are important, do a lot of EDA, come up with some conclusions, and then implement a linear regression model end to end. To begin, let me quickly create some cells in this Jupyter notebook (you can use the shortcut Esc then A to insert a cell above). First we import some libraries: import pandas as pd, because pandas is super important here, and I am going to write and explain every line of code; import numpy as np; and import matplotlib.pyplot as plt. For inline display of any graphs or visualizations we also run the magic command %matplotlib inline. With all of these imported, let's load the Boston house pricing dataset.
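The imports described above can be sketched as below. Note that the `%matplotlib inline` magic only works inside a Jupyter notebook, so it appears here as a comment:

```python
# Core libraries for the project, as introduced above.
import pandas as pd               # DataFrames and tabular analysis
import numpy as np                # numerical arrays
import matplotlib.pyplot as plt   # plotting

# Inside a Jupyter notebook you would also run the magic command:
# %matplotlib inline
# so that figures render inline below each cell.

print(pd.__version__, np.__version__)
```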
This dataset is already bundled in a library. As we go ahead we will take up different problem statements and load datasets from different files and sources, but today we are going to use scikit-learn, which ships with a number of datasets. So from sklearn.datasets I import load_boston; alongside it there are various other datasets you can explore. I call load_boston() and first check the type of the result: it is sklearn.utils.Bunch. A Bunch is a dictionary-like object, so if I call .keys() on it I can see the available keys: data, target, feature_names, DESCR, and filename. Everything will make sense once we look at each of these. First I want to see DESCR, which holds the description of the entire dataset, and it is always good practice to write a comment whenever you are coding, so: let's check the description of the dataset. All you have to do is print that attribute. Also, rather than naming the variable boston_df, let me just name it boston, because we usually reserve df for DataFrames, and right now this is the whole dataset bundle, not a DataFrame.
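A minimal sketch of this step is below. One caveat: `load_boston` shipped with scikit-learn only up to version 1.1 and was removed in 1.2, so on newer versions the sketch mimics the same Bunch layout with placeholder values (the numbers below are illustrative, not the real Boston data):

```python
import numpy as np
from sklearn.utils import Bunch

# On scikit-learn <= 1.1 the original call from the video is simply:
#   from sklearn.datasets import load_boston
#   boston = load_boston()
# On newer versions we can reproduce the same dictionary-like structure
# by hand (placeholder values, for illustration only):
boston = Bunch(
    data=np.array([[0.006, 18.0], [0.027, 0.0]]),  # feature values, one row per record
    target=np.array([24.0, 21.6]),                 # house prices (MEDV)
    feature_names=np.array(["CRIM", "ZN"]),
    DESCR="Boston house prices dataset (description text)",
)

print(type(boston))         # sklearn.utils.Bunch, a dict subclass
print(list(boston.keys()))  # data, target, feature_names, DESCR
```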
So I print(boston.DESCR), and here you can see the entire information about this dataset. The Boston housing price dataset has 506 instances (that is, 506 data points) and 13 numeric/categorical predictive attributes; the median value attribute (MEDV) is usually used as the target. You can see the various features present in the dataset: CRIM, the per capita crime rate by town (so for the town a house sits in, how high the per capita crime rate is; obviously if this increases, the price will tend to decrease); ZN, the proportion of residential land zoned for lots over 25,000 square feet; INDUS, the proportion of non-retail business acres per town; CHAS, the Charles River dummy variable (1 if the tract bounds the river, 0 otherwise); NOX, the nitric oxide concentration (if this is higher, that town has more pollution); RM, the average number of rooms per dwelling; and then AGE, DIS, RAD, TAX, PTRATIO, B, and LSTAT. PTRATIO is the pupil-teacher ratio by town, LSTAT is the percentage of lower-status population, and MEDV is the median value of owner-occupied homes in thousands of dollars. Do check out all of this information. Now, going forward, how do we see the actual data for these features, and the output feature, which is obviously the price of the house? I will quickly print boston.data, which holds the values for all of the input features.
For these input features, boston.data gives the data as a nested array: all the feature values for the first record are in the first row, the second record in the second row, and so on. If I want the price, I print boston.target, which is a single one-dimensional array of house prices, one per record. Similarly, boston.feature_names lists all the available features. That covers the basic information about the dataset; it is always important to understand a dataset first, and I hope everybody now has an idea of what information we can retrieve from it. The description also mentions the repository URL the dataset was taken from, so you can read more about it there. We will do some EDA in the next video, but I think the introduction to and understanding of the dataset is now clear. We can also convert this entire dataset into a DataFrame, which we will do next. I'll see you in the next video, thank you. [Music] Now in this video we are going to prepare the entire dataset. In the previous video you understood the Boston house pricing dataset; for the preparation, let's go ahead and create our DataFrame.
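The data/target/feature_names inspection above can be sketched with small stand-in arrays (placeholder values; with scikit-learn <= 1.1 these would come from `load_boston()`):

```python
import numpy as np

# Stand-ins for boston.data / boston.target / boston.feature_names
# (placeholder values, trimmed to 3 features for illustration).
data = np.array([
    [0.00632, 18.0, 6.575],   # record 1: CRIM, ZN, RM
    [0.02731,  0.0, 6.421],   # record 2
])
target = np.array([24.0, 21.6])           # one price (MEDV) per record
feature_names = np.array(["CRIM", "ZN", "RM"])

# data is a nested array: one row per record, one column per feature.
print(data.shape)    # (2, 3) -> 2 records, 3 features
print(target.shape)  # (2,)   -> a single 1-D array of prices
print(feature_names)
```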
I am going to use pd.DataFrame, which helps us create the whole DataFrame from the independent features that live in boston.data. If I write pd.DataFrame(boston.data), I get a DataFrame, and dataset.head() gives the top five records: the first record, the second, the third, fourth, fifth, and so on. One thing I notice is that we still don't have column names. There are two ways to add them; the simplest is that pd.DataFrame takes a columns parameter, so I pass columns=boston.feature_names, and now I can see the entire DataFrame with proper column names. But we are still missing the output feature: these are all my independent features, and at any point of time, in any dataset, there is one output (dependent) feature while the remaining columns are independent features. So how do I put the output feature into this DataFrame? It is present in boston.target, so I create a new column called Price and assign it: dataset['Price'] = boston.target. Now dataset.head() shows the top five records with the new column.
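The DataFrame-building steps above look roughly like this (again with placeholder arrays standing in for `boston.data`, `boston.feature_names`, and `boston.target`):

```python
import numpy as np
import pandas as pd

# Placeholder stand-ins for the Bunch attributes (illustrative values only).
data = np.array([
    [0.00632, 18.0, 6.575],
    [0.02731,  0.0, 6.421],
    [0.02729,  0.0, 7.185],
])
feature_names = ["CRIM", "ZN", "RM"]
target = np.array([24.0, 21.6, 34.7])

# Build the DataFrame of independent features, naming the columns...
dataset = pd.DataFrame(data, columns=feature_names)
# ...then attach the output (dependent) feature as a new column.
dataset["Price"] = target

print(dataset.head())  # top five records (here: all three)
```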
Here is the output feature, Price, in the DataFrame: we have prepared the complete dataset. Next, with respect to this dataset, let's quickly do some analysis, which is super important. First I use the built-in dataset.info(): it reports the data type of every column, and here you can see everything is float64 and non-null. One more important point: we also need to check whether there are any missing values. So info() gives basic information about your columns and their data types; if a column were a categorical data type we would need a different kind of preprocessing, but since this is our first problem statement I have kept it simple. Since the columns are already floating point (and I hope you know about percentiles, like the 25th percentile, the median, the 75th percentile), we can also summarize the statistics of the data. For that I use dataset.describe(), which gives the count, mean, standard deviation, min, 25%, 50%, 75%, and max. Remember one thing: if you have categorical features, describe() will not include them by default; only the numerical columns are considered when computing the percentiles and the other statistics.
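These two inspection calls can be sketched on a small stand-in DataFrame (placeholder values, same idea as the full Boston dataset):

```python
import pandas as pd

# Placeholder DataFrame for illustration.
dataset = pd.DataFrame({
    "CRIM":  [0.006, 0.027, 0.027, 0.032],
    "RM":    [6.575, 6.421, 7.185, 6.998],
    "Price": [24.0, 21.6, 34.7, 33.4],
})

# Column dtypes and non-null counts:
dataset.info()

# Summary statistics: count, mean, std, min, 25%, 50%, 75%, max.
# Only numeric columns are included by default.
stats = dataset.describe()
print(stats)
```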
In short, we are summarizing the entire statistics here. So far we have prepared our dataset, seen how to summarize the data, and how to view the generic information. Whenever you get a dataset, one of the first things you have to do is check for missing values; I'll say it again, this step is super, super important. For this I use the dataset.isnull() function, which just tells me, cell by cell, True or False: if any value is True, say for some feature at record 501, that particular value is missing. Since I cannot eyeball every record, I chain an aggregate function, .sum(), giving dataset.isnull().sum(), which counts the missing values per column, and it tells us there are none. Since this is a simple first problem statement, that is expected; later on, when we take up different problem statements, we will check for missing values and also see how to replace them. So: no missing values here, we are good to go. After this, we are also going to do some EDA, exploratory data analysis, where we analyze the data.
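The missing-value check can be sketched like this; here one value is deliberately left missing (a placeholder example) so that the count is visible:

```python
import numpy as np
import pandas as pd

# Placeholder DataFrame with one deliberately missing value.
dataset = pd.DataFrame({
    "CRIM":  [0.006, np.nan, 0.027],
    "Price": [24.0, 21.6, 34.7],
})

# Element-wise True/False mask of missing cells...
print(dataset.isnull())
# ...then an aggregate count of missing values per column.
missing = dataset.isnull().sum()
print(missing)  # CRIM: 1 missing value, Price: 0
```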
We now know there are no missing values and everything is floating point. In a regression problem statement, the first piece of code you should definitely run is correlation. Correlation is super important in any regression problem because we really need to find out how the independent features and the output feature are correlated: if they are highly positively or highly negatively correlated, that indicates our model's performance will likely be high. This is one of the important properties with respect to linear regression; first I am going to implement everything, and then later I will make a theoretical video to help you understand the assumptions of linear regression, which are super important. To compute it, I write dataset.corr() (I almost typed df.corr() out of habit, since we generally name our DataFrames df, but here the variable is dataset). By default, the correlation method used is Pearson. Pearson correlation measures the relationship between two features x and y: whether they are highly positively correlated, highly negatively correlated, or not correlated at all. Its output ranges between -1 and +1.
The closer the value is to +1, the more positively correlated the pair is; the closer to -1, the more negatively correlated; and if it is near 0, they are hardly correlated. So this is a super easy way to check correlation. There are two kinds of correlation you really need to check: correlation among the independent features, and correlation between the independent features and the dependent feature. If there is very high correlation between two independent features, you can remove one of them; that situation is called multicollinearity. So in linear regression we really do need to check for multicollinearity: if two of my independent features are very highly correlated, say at 0.95 or 0.96, or at -0.95 or -0.96 (remember, the values lie between -1 and +1), we can drop one of those two independent features and just keep the other. The other correlation to focus on is between our independent features and our output feature, Price. For example, CRIM and Price are negatively correlated at about -0.388, somewhere around 39 percent; ZN versus Price is about +0.36. And what is CRIM? It is the per capita crime rate by town, so obviously if crime increases in a specific town the house price will decrease, which is why the correlation with Price is about -0.39. Similarly for the other features.
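A minimal sketch of the correlation step, on placeholder data (the real values come from the full dataset):

```python
import pandas as pd

# Placeholder data illustrating the idea: as CRIM rises, Price mostly falls.
dataset = pd.DataFrame({
    "CRIM":  [0.1, 0.5, 1.2, 3.0, 8.9],
    "RM":    [6.5, 6.4, 7.1, 6.0, 5.6],
    "Price": [24.0, 22.1, 34.7, 17.0, 11.9],
})

# Pairwise Pearson correlation (the default method), values in [-1, +1].
corr = dataset.corr()
print(corr)

# Correlation of every feature with the target column:
print(corr["Price"])
```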
For example, RM is around 0.69, positively correlated, and AGE is around -0.37, negatively correlated. In the next video we will analyze these correlations by constructing some important plots, like scatter plots or a heatmap, and we will also look at whether there are outliers in the price. But this correlation step is super important, and there are two things you really need to check. One is multicollinearity; there are also other methods to check for multicollinearity, which we will discuss as we go ahead, but essentially it is the relationship among the independent features that you have to check. Here, no pair of independent features is extremely highly correlated: TAX and INDUS, for example, are about 0.72 correlated, but I would not call that very, very high, so I will definitely not drop either feature. If, say, that pair had been around 0.96 correlated, I could have dropped INDUS and kept TAX (or the other way around, any one of them) for training the model. So this is a super important part of working with correlations. Based on this correlation we can also draw a scatter plot; for this I will use Seaborn: import seaborn as sns, and then sns.pairplot, which shows how each pair of features, like CRIM and Price, is scattered against each other across the data points.
I pass the DataFrame (sorry, this should be dataset, not df, my mistake), and it takes some time because there are so many features, and for every pair of features there is a separate scatter plot: maybe five to six seconds at most, and here you can see the entire grid. If I zoom in, you obviously cannot make out much detail, but you can see the data plotted between pairs of features, showing some positive or negative correlation. Since this information is hardly visible with so many features, I would again rely on the correlation matrix itself; the pairplot is the visual counterpart of those same values, computed from how each pair is distributed, and with fewer features you can see it much more clearly. For instance, take the panel for CRIM against the last feature, Price: the relationship looks somewhat negatively correlated, because the points slope downward in an inverted manner. So definitely check out this view. In my next video I am going to do a lot of analysis based on this information. Thank you, I'll see you in the next video. [Music] In this video we are going to continue the discussion of the Boston housing price dataset, and now we are going to analyze the correlated features. We have already seen what our features look like; we really have to analyze the relationship between the independent and dependent features.
Let's start with CRIM, the crime-rate feature, and look at its relationship with Price. To quickly create a scatter plot, I write plt.scatter(dataset['CRIM'], dataset['Price']) (again, I first typed df out of habit; it should be dataset). And here you can see the graph. You can definitely see the relationship between crime and price: the x axis is the crime rate, and you can label the axes too, with plt.xlabel('Crime Rate') and plt.ylabel('Price'). So now you can clearly see the relationship: as the crime rate keeps increasing, the price keeps decreasing; they are inversely correlated, about -0.388 as we saw, and that is why the scatter plot looks like this. Similarly, with some of the other features: take RM, which we saw is about 0.69 positively correlated, and plot it the same way, and always make sure you write down every observation you are able to make. Here the x label will be RM and the y label will be Price.
What is the RM feature? It is the average number of rooms per dwelling, so obviously as the number of rooms keeps increasing, the price will also keep increasing. Now suppose RM were my only independent feature against Price, and I want to create a regression plot: for this I use Seaborn's sns.regplot, the regression plot. Let's see what it needs: an x feature and a y feature. In this case my x feature is 'RM' and my y feature is 'Price', and I also have to supply my data, so although data defaults to None, I pass data=dataset, which is my entire DataFrame. If I execute it, you can see the same scatter displayed, but now with a simple fitted linear regression line, and a band showing the amount of variation that will always be there. So here you can clearly see there is a positive correlation between RM and Price. Similarly, you can take up any other feature; one interesting one is LSTAT versus Price, which is fairly negatively correlated. To analyze it, I just copy that cell, swap in 'LSTAT' against 'Price', and executing it you can see it is indeed negatively correlated, with a regression plot like this. This is some of the analysis you can do; you can also check the features that are not correlated at all, like CHAS versus Price, so let's plot that too.
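The scatter-plus-labels and regplot steps can be sketched as follows (placeholder data again; the real calls use the full dataset):

```python
import matplotlib
matplotlib.use("Agg")  # headless-safe backend; not needed inside a notebook
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Placeholder DataFrame for illustration.
dataset = pd.DataFrame({
    "CRIM":  [0.1, 0.5, 1.2, 3.0, 8.9],
    "RM":    [6.5, 6.4, 7.1, 6.0, 5.6],
    "Price": [24.0, 22.1, 34.7, 17.0, 11.9],
})

# Plain matplotlib scatter of crime rate against price, with axis labels.
plt.scatter(dataset["CRIM"], dataset["Price"])
plt.xlabel("Crime Rate")
plt.ylabel("Price")
plt.savefig("crim_vs_price.png")
plt.close()

# Seaborn regression plot: the same scatter plus a fitted line and its
# confidence band, here for RM against Price.
ax = sns.regplot(x="RM", y="Price", data=dataset)
ax.figure.savefig("rm_vs_price.png")
```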
For CHAS versus Price, the data points are scattered all over while we try to fit a best-fit line. It is super important to understand that linearity should exist in your dataset: linearity basically means that when x increases y increases, or when x decreases y decreases (that relationship, or the inverse relationship, should be present) in order to build a better regression model. So whenever we have features like this that are hardly correlated, they tend to increase the error of the regression model rather than help it. Now let's quickly do a few more, and again, you have to write down your observations: as LSTAT decreases the price increases, and as RM, the number of rooms, increases the price increases. Let's take one more: PTRATIO. What is PTRATIO? The pupil-teacher ratio by town, that was the feature. So let's repeat the same regplot with 'PTRATIO', and here you can see there is some amount of negative correlation, though with a fair number of scattered points: as the pupil-teacher ratio increases the price decreases, an inverse relationship. These are some of the insights I could take out of this dataset. Now let's go ahead and do one important thing: creating our models.
First of all, there are multiple steps we really need to perform: we need to prepare our independent and dependent features, and after creating them we need to do the train-test split, and several more things. So let's do all of this and see how it works. The first step is dividing our columns into independent and dependent features. As you know, in this entire dataset Price is my dependent feature and all the remaining columns are my independent features, so let's quickly create them. I will name the independent features X, using dataset.iloc[:, :-1]: the first colon takes all the rows, and :-1 takes every column except the last one. In other words, I take up all the independent features and just skip the last column, Price. That gives me my independent features. Similarly, for y I use dataset.iloc[:, -1], again taking all the rows but only the last column, which is Price. (When I execute it I see I typed df again; it should be dataset, I'm extremely sorry about that. But it's okay, guys, it is always good to make some mistakes, because only then will you practice more, and I am happy when I make mistakes, because that is how I find out what problem we are facing and avoid repeating it.) So this becomes my X.
X.head() shows the features, and y holds the prices. After this comes the most important step: the train-test split. The plan is what we discussed when we covered performance metrics: whenever we get our dataset we should do a train-test split and keep that entire test dataset separate, so that we never touch it; only once we have created our model do we test it on that unseen data and measure the model's performance then and there. Other than that, we use the entire training dataset to see how the model is learning. For the split I use from sklearn.model_selection import train_test_split; everything here comes from sklearn. Then I unpack X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42); these are the four pieces the split creates. The test_size=0.3 means 30 percent of the data goes to the test set (you could equivalently specify train_size), and random_state=42 is just an arbitrary seed; if you use the same random_state value you should get the same train-test split. Now you can check X_train, which holds the training portion of the data, and likewise X_test, y_train, and y_test. I am not going to touch the test data now; it is reserved for prediction and for checking how the model performs, while I focus on the training data. In the next video we start the model training.
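The X/y split and train-test split above can be sketched like this (placeholder data with ten records so the 70/30 split is visible):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Placeholder DataFrame: two features plus the Price target, 10 records.
dataset = pd.DataFrame({
    "CRIM":  [0.1, 0.5, 1.2, 3.0, 8.9, 0.2, 0.4, 2.2, 5.1, 0.9],
    "RM":    [6.5, 6.4, 7.1, 6.0, 5.6, 6.8, 6.2, 5.9, 5.7, 6.6],
    "Price": [24.0, 22.1, 34.7, 17.0, 11.9, 28.2, 21.0, 16.4, 13.1, 25.3],
})

# Independent features: every column except the last (Price).
X = dataset.iloc[:, :-1]
# Dependent feature: only the last column.
y = dataset.iloc[:, -1]

# 30% held out for testing; a fixed random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
print(X_train.shape, X_test.shape)  # (7, 2) (3, 2)
```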
Before model training there is one very important step called standard scaling. Why does it matter here? If you've understood the math behind linear regression, you know we use gradient descent, and our main aim is to converge to the global minimum. Every feature in this dataset is measured in different units, and when features sit on different scales, convergence of gradient descent is slower. So, to make convergence faster, we standardize all the features to the same scale with a StandardScaler. So: from sklearn.preprocessing import StandardScaler, then scaler = StandardScaler() creates the scaling object. On the training set I call scaler.fit_transform(X_train), which learns the scaling parameters and transforms X_train to the common scale. For the test set we do not call fit_transform again; we only call scaler.transform(X_test), because whatever statistics were learned from the training data must be reused unchanged on the test data. This is done to make sure the model does not get information about the test set. You can inspect X_train and X_test to see the scaled values. We don't scale the output feature, but whenever new data arrives later, we must apply the same transform before predicting.

This is a super important step, and it can be an interview question: why do you standardize the dataset in linear regression? You just have to say that internally we use gradient descent, our main aim is to reach the global minimum, and having all the independent features on the same scale makes the convergence happen quickly. In the next video we'll implement the linear regression algorithm on this dataset: the data is split and standardized, so all that's left is to train the model, find a good accuracy, and see how it performs on the test data. I'll see you in the next video, thank you.

[Music]

In this video we'll train the model on the data we just split into train and test and then standardized. To do the model training, we first need to import the linear regression model.
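The fit-on-train, transform-on-test pattern can be sketched as follows, using synthetic stand-in arrays for the split data (the shapes and values here are assumptions, not the real Boston features):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Stand-in feature matrices on an arbitrary scale.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=3.0, size=(70, 13))
X_test = rng.normal(loc=5.0, scale=3.0, size=(30, 13))

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std from train only
X_test_scaled = scaler.transform(X_test)        # reuse the SAME mean/std on test

# Training columns are now standardized: mean ~0, std ~1 per feature.
print(X_train_scaled.mean(axis=0).round(6))
```

Calling fit_transform on the test set instead would leak test-set statistics into the preprocessing, which is exactly what the video warns against.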
For this I use from sklearn.linear_model import LinearRegression. One tip: if you're ever confused about where to find the linear regression model, just search for sklearn LinearRegression, and the documentation shows the model with its default parameters. The first parameter is fit_intercept, whether to calculate the intercept for this model; it is True by default, and we do want the intercept. There is also a normalize parameter (ignored when fit_intercept is set to False) which would normalize the regressors before fitting, but we have already standardized the data ourselves, and you can see that normalize is deprecated, so it's better to do the scaling on our side anyway. Linear regression doesn't have many other parameters; when we learn algorithms like ridge, lasso and elastic net, we'll play with more of them. So regression = LinearRegression() initializes the model, and regression.fit(X_train, y_train) trains it.

In short, once we call fit, on the entire training data we basically create a hyperplane, because with this many features we're not fitting a line or a 3-D plane; with so many features it's a hyperplane. Whenever we solve a regression problem, two important things come out of it: the coefficients and the intercept. print(regression.coef_) prints the coefficients, and there is exactly one per independent feature. For example, the first feature is CRIM, the crime rate, so its coefficient means that for a unit increase in CRIM there is that many units of decrease in Price (say crime goes up, then the price goes down by the corresponding amount, in whatever units the price is measured). Similarly there is a single intercept, printed with regression.intercept_, and I've already explained its importance. One more useful thing: you can check which parameters the model was trained with.
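Training and inspecting the fitted model can be sketched like this; the data is a synthetic stand-in with a known linear relationship (the coefficients and the intercept of 4.0 are assumptions made so the recovered values are checkable):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: y is a known linear function of X plus small noise.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(70, 13))
true_coef = rng.normal(size=13)
y_train = X_train @ true_coef + 4.0 + rng.normal(scale=0.1, size=70)

regression = LinearRegression()          # fit_intercept=True by default
regression.fit(X_train, y_train)         # fits a hyperplane through the data

print(regression.coef_)        # one learned coefficient per feature
print(regression.intercept_)   # close to the true intercept of 4.0 here
print(regression.get_params()) # the parameters the model was trained with
```

Because the synthetic target is nearly noiseless, the learned coefficients land very close to true_coef, which makes the one-coefficient-per-feature idea easy to verify.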
For that, call regression.get_params(); it's a method, so execute it with parentheses. It shows copy_X is True, fit_intercept is True, n_jobs is None and normalize is False, the same information you'll see in the sklearn documentation. Now one super important thing: once we've trained the model, we do prediction. Let's predict on the test data first (I'll show prediction on new data later). reg_pred = regression.predict(X_test) gives the predictions, and printing reg_pred shows all of them. An important point about these predictions is that we can compare them with y_test, because y_test holds the true values for X_test. So how do we check whether the model has actually performed well? I'll plot some diagrams that help you judge whether the predictions you got from the model are good or bad.

First, a scatter plot for the prediction: plt.scatter(y_test, reg_pred) (you can also swap the order). If the points fall roughly along a line, that basically means your model has performed well: reg_pred is the predicted value, y_test is the truth, and when the two track each other almost linearly, the model has captured a good amount of information and is predicting well. That's the first plot to start with. The second concerns the residuals, which are just the errors: residuals = y_test - reg_pred. Calculating and printing that shows the difference between y_test and the predictions. To plot the distribution of these residuals I'll use seaborn's displot with kind='kde' (the older distplot is deprecated).
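The prediction step can be sketched with stand-in data; the coefficients below are synthetic and noiseless so the predictions can be checked exactly:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in train/test data with a known, noiseless relationship.
rng = np.random.default_rng(5)
X_train = rng.normal(size=(70, 13))
X_test = rng.normal(size=(30, 13))
coef = rng.normal(size=13)
y_train = X_train @ coef + 3.0

regression = LinearRegression().fit(X_train, y_train)
reg_pred = regression.predict(X_test)  # one prediction per test row

print(reg_pred.shape)  # (30,)
```

With 70 samples and 13 features of noiseless data, OLS recovers the relationship exactly, so predict reproduces the true linear function on the test rows.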
The resulting curve looks like a normal distribution, but on the right-hand side there are some outliers, points where the errors are quite high; the same points stand out in the scatter plot too, clearly away from the rest, so the difference there is large. The main assumption is that the residuals should come out normally distributed, and that's roughly what we see: the maximum differences mostly fall between -10 and +10, with some points ranging from 10 to 30. So I'd still say the model is performing well, because when I make this kind of residual plot I should get a normal distribution, and I do. That is the second check you can consider.

Now one more scatter plot, this time of the predictions against the residuals: plt.scatter(reg_pred, residuals). Here there should be no fixed pattern at all. When I plot the predicted output values against the errors, the points are scattered without following any distribution or shape; some are here, some are there, but most of the points are scattered uniformly. Yes, uniformly is the word. So a uniform distribution is there between the predictions and the residuals, which is super important, and I'll note down "uniform distribution" here. This still looks good, so it gives one more indication that the model is performing well.

I've plotted many different diagrams to indicate how the model is performing, but to be sure we definitely have to use performance metrics. First, from sklearn.metrics import mean_squared_error (I hope everybody remembers mean squared error, it was our cost function) and also mean_absolute_error. Note there is no separate root-mean-squared-error import in sklearn.metrics; we'll get RMSE by taking the square root of MSE. So I print mean_absolute_error(y_test, reg_pred) and mean_squared_error(y_test, reg_pred), and both values appear.
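The three diagnostic plots can be sketched with matplotlib alone (a histogram stands in for seaborn's KDE plot here); the y_test and reg_pred arrays are synthetic stand-ins, not the real model outputs:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Stand-in truth and predictions: predictions are truth plus modest noise.
rng = np.random.default_rng(2)
y_test = rng.normal(loc=22.0, scale=9.0, size=30)
reg_pred = y_test + rng.normal(scale=3.0, size=30)

residuals = y_test - reg_pred  # the errors

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].scatter(y_test, reg_pred)     # should look roughly linear for a good model
axes[0].set_title("truth vs prediction")
axes[1].hist(residuals, bins=10)      # should look roughly normal, centred at 0
axes[1].set_title("residual distribution")
axes[2].scatter(reg_pred, residuals)  # should look uniformly scattered, no pattern
axes[2].set_title("prediction vs residual")
fig.tight_layout()
```

For a well-behaved model, the first panel hugs a diagonal, the second is bell-shaped around zero, and the third shows no structure, which is exactly the three-way check described above.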
There is also the root mean squared error. To get it, apply np.sqrt to the mean squared error value; that gives RMSE. So those are all my metric values, and always remember that these values indicate how the model is performing. Apart from this, we've already seen that plotting the residuals gives a roughly normal distribution, with some outlier differences, but most of the errors fall between -10 and +10. One more performance metric we can definitely use is R square together with adjusted R square; I'll discuss that part in the next video, and it is another super important metric whenever we create a linear regression model. So I hope you've understood everything up to here. After the model training I made some assumption checks to understand whether the model is going to perform well, and you can use all of them. But at the end of the day, if your data points are highly scattered, understand one important thing: with a single straight line, which is all that linear regression fits, you won't get very good predictions, because the data is highly distributed and you have a large number of features. Still, we've built the model here and made some assumptions.
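The three error metrics can be sketched on tiny made-up arrays (the numbers below are assumptions chosen so the results are easy to check by hand):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_test = np.array([24.0, 21.5, 30.0, 15.2])    # stand-in true prices
reg_pred = np.array([23.0, 22.5, 28.0, 16.2])  # stand-in predictions

mae = mean_absolute_error(y_test, reg_pred)  # mean of |error|
mse = mean_squared_error(y_test, reg_pred)   # mean of error^2 (the cost function)
rmse = np.sqrt(mse)  # no separate sklearn import needed; just take the square root

print(mae, mse, rmse)  # 1.25 1.75 ~1.3229
```

The absolute errors here are 1, 1, 2 and 1, so MAE is 1.25 and MSE is (1 + 1 + 4 + 1)/4 = 1.75; RMSE brings the squared metric back to the price units.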
The assumption checks were: the scatter plot of y_test against the regression predictions; the residuals, and what their distribution should look like when we make that kind of KDE plot; the scatter plot of the predicted values against the residuals, the errors; and similarly we can use mean squared error, mean absolute error and root mean squared error to find out how well the model is performing. In the next video, which is super important, we'll implement R square and adjusted R square and see what kind of score we are able to get. So thank you, I'll see you in the next video.

[Music]

So guys, in this video we are going to check more performance metrics and see whether the regression model we created is good or not. First we'll use R square and then adjusted R square; I've already explained these in the theory. The R square formula is 1 minus the sum of squared residuals divided by the total sum of squares, and adjusted R square is 1 minus (1 - R square)(n - 1)/(n - k - 1), where n is the number of samples and k is the number of features; you should always know that adjusted R square will be less than R square. To calculate it, we import from sklearn.metrics the r2_score function, and score = r2_score(y_test, reg_pred). Printing the score gives about 0.71, which is good enough for the kind of problem we're solving; the closer it is to 1, the better the score.
Next, adjusted R square. There is no sklearn function for it, so we implement the formula directly: 1 - (1 - score) * (len(y_test) - 1) / (len(y_test) - k - 1), where len(y_test) is n and k is the number of features. Just try to understand it, I've already explained this. Executing it, the adjusted R square comes out below the R square, which is exactly the property that needs to hold, at about 0.68, so this also looks good. Across these videos I've made sure to explain everything about the regression model and how to check your performance with the different metrics: mean absolute error, mean squared error, root mean squared error, R square and adjusted R square. I'd suggest you go through it and practice, and you'll be able to do the same with a new dataset. In the upcoming datasets we'll take up more complex topics: more EDA, more feature engineering and feature selection. I hope you liked this one; I'll see you in the next practical session. Thank you, bye, take care.

[Music]

So guys, in this video we'll take new data and run it through the regression model we created, and see what output we get. For this kind of prediction we can either get bulk data or single data points. With bulk data, as in the X_test example, we can directly use regression.predict(X_test), and we've already done that. But in most scenarios we'll receive single data points. Going back to the data: suppose I want to pick the first data point of the independent features and predict on it. How do I do that? Taking the row at index 0 gives the entire data point, but look at its shape: it is (13,), just one dimension. When we trained the model we gave it two-dimensional input, so many rows by so many feature columns. So first I reshape this into (1, -1); checking the shape now shows one row and thirteen columns, and that is the form in which the data has to be given to the model. If I pass the reshaped point to regression.predict, I get an output of about -45. You may be thinking: why a negative value? Because we missed one very important step: whenever we get new data, we first have to standardize it. Don't ever forget standardization.
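The R square and adjusted R square calculation can be sketched like this, using small made-up arrays and an assumed feature count k (the values are illustrations, not the video's actual 0.71/0.68 scores):

```python
from sklearn.metrics import r2_score

y_test = [24.0, 21.5, 30.0, 15.2, 18.0, 27.5]    # stand-in true values
reg_pred = [23.0, 22.5, 28.0, 16.2, 19.0, 26.0]  # stand-in predictions

score = r2_score(y_test, reg_pred)  # 1 - SS_res / SS_tot

n = len(y_test)  # number of samples
k = 2            # number of features (an assumption for this tiny example)
adjusted = 1 - (1 - score) * (n - 1) / (n - k - 1)

print(score, adjusted)  # adjusted is always below score when score < 1
```

Because adjusted R square penalizes the feature count, it always comes out below plain R square for any model that doesn't fit perfectly, which is the sanity check the video makes.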
Remember, for standardization we used a StandardScaler, and scaler was the object name. So the first thing I do is call scaler.transform on the new values, because that is the transformation step we applied to the training data, and the same thing has to be done for new data. scaler.transform(...) gives me the scaled data, and using that same scaled data for the prediction now gives the correct value. So don't ever get confused: whatever transformation steps we did from the start also have to be followed for new-data predictions. This is a super important step, which is why I'm making a separate video on how predictions need to be done for a new data point. Later, in the end-to-end application, we'll create an HTML page, provide all these inputs through a form, and display the output in the website or web application. So that is how you do prediction with respect to new data. I hope you liked this; that was it from my side, thank you, bye.

[Music]

In this video we are going to pickle the regression model file so that we can use it further for deployment, because for all the projects I'll be showing you, at the end of the day I'll do the deployment with the help of Docker and GitHub Actions. To begin with, we need to import pickle. This is a very important library altogether, because it will help us serialize the regression model. To convert the model into a pickle file I use pickle.dump. The first parameter is the object, so I pass our trained model, regression. The next parameter says which file to dump the object into, so I use Python's open function: open('regmodel.pkl', 'wb'), giving the filename as a string and opening it in write-byte mode. Since I haven't given any path, the file is created in the same folder this notebook is running in; it is opened in write-byte mode and all the contents of the model object are written into it. A pickle file is a serialized format, so it can be deployed on any server, and if you want to do prediction from a web application you can easily do it. On the first run I got an error, "an integer is required (got type str)", which turned out to be a misplaced open bracket; after fixing it, the file is created, and if you click Open you'll see the regression model pickle file there. The pickle file can also be loaded back with the same pickle library; let's see how.
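Putting the reshape and rescale steps together for a single data point looks like this; the training data is a synthetic stand-in (the relationship y = sum of the features plus noise is assumed so the prediction can be checked):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in training data on an arbitrary (unscaled) scale.
rng = np.random.default_rng(3)
X_train = rng.normal(loc=5.0, scale=2.0, size=(70, 13))
y_train = X_train.sum(axis=1) + rng.normal(size=70)

scaler = StandardScaler()
regression = LinearRegression()
regression.fit(scaler.fit_transform(X_train), y_train)  # trained on SCALED data

new_point = X_train[0]      # a single raw data point
print(new_point.shape)      # (13,) -- one-dimensional, but the model needs 2-D

reshaped = new_point.reshape(1, -1)  # 1 row, 13 columns
scaled = scaler.transform(reshaped)  # same transform as the training data
prediction = regression.predict(scaled)
print(prediction)
```

Skipping the scaler.transform step and predicting on the raw reshaped point would feed the model values on the wrong scale, which is exactly how the bogus -45 output arose in the video.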
I'll call the loaded object pickled_model and use pickle.load, this time opening the file in read-byte mode, which is how a pickle file should be loaded: pickled_model = pickle.load(open('regmodel.pkl', 'rb')). Once this executes, remember the value we predicted earlier? We can do that prediction again with pickled_model.predict; after closing a bracket I had missed, it gives exactly the same output as before. In short, once you create your model, you can pickle it and save it as a file in local storage or in the cloud, wherever you want, and this is what happens in real-world scenarios: for example, with an AWS S3 bucket, I would store the pickle file there, load it later, and do the prediction. The pickle file stores all the information about that specific model in a serialized format, and we can load it again and predict again. That is what we'll do in the next videos, because now I'm going to create an end-to-end project: an HTML page with inputs for all the values, using this pickle file, and we'll also dockerize the whole thing into a container and deploy it on the Heroku platform, AWS or Azure; let's go with Heroku, and we'll also use GitHub Actions for creating the CI/CD pipeline. So that was it with respect to pickling the model file. In the next video we'll discuss how to convert this entire project into an end-to-end project, and for that we'll use Visual Studio Code. I'll see you in the next video, thank you.

[Music]

So guys, till now we have created a model for the Boston house pricing dataset, tested it on new data, and finally pickled the model file for deployment. Now we'll move towards converting this entire project into an end-to-end project. First of all, we're going to use Git, and we'll see a lot of things while covering this project. When we create an end-to-end project we need to follow industry standards, so we'll use GitHub, CI/CD pipelines (specifically GitHub Actions here), and some cloud where we can deploy the application, and we'll create a simple front-end application as we go ahead in this series. To begin, go to your GitHub profile, click on Repositories, and create a new repository; mine is called bostonhousepricing. I also add a README file, and I need a .gitignore file for Python, because some files that Python generates shouldn't be committed to GitHub again and again; that's why I'm specifically selecting the Python .gitignore template. You can also pick any kind of license.
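The dump-then-load round trip can be sketched like this; the model and data are synthetic stand-ins, and the filename regmodel_demo.pkl is a throwaway name for the example:

```python
import pickle
import numpy as np
from sklearn.linear_model import LinearRegression

# Train a stand-in model on synthetic, noiseless data.
rng = np.random.default_rng(4)
X = rng.normal(size=(50, 13))
y = X @ np.arange(13) + 1.0
regression = LinearRegression().fit(X, y)

# Serialize the trained model to disk ("wb" = write bytes).
with open("regmodel_demo.pkl", "wb") as f:
    pickle.dump(regression, f)

# Load it back ("rb" = read bytes) and predict with the loaded copy.
with open("regmodel_demo.pkl", "rb") as f:
    pickled_model = pickle.load(f)

same = np.allclose(pickled_model.predict(X[:1]), regression.predict(X[:1]))
print(same)  # True -- the loaded model gives identical predictions
```

Using with-blocks closes the file handles automatically, which avoids the bracket-placement error hit in the video when nesting open inside pickle.dump.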
that i am picking up some apache license 2.0 and then finally i will go ahead and create my repository okay now this is the first step please go ahead and create your own github account along with that create a repository now here you are able to see that okay fine i can see that my entire repository is over here and whatever code i'm going to write going ahead you know i'm going to commit in this repository okay now in order to you know clone this repository because i really need to clone this repository in my local so that i'll be able to make a commit on to it okay so in order to make a clone what all things we can do is that first of all just go over here you'll be able to see local over here local as an option and here you have something called as clone okay so i'm going to click on this and i have created one folder which is called as end-to-end project and my first project will be related to boston house pricing data set okay so here uh i'll just open my command prompt so here you will be able to see i can see my command prompt and then i will go to this specific path because i want my repository over here so this will go to e again you open command prompt you open terminal uh terminal from your mac or linux and go to that specific folder where you want to clone that repository so i will go to that particular folder and then i'm just going to write git clone okay and whatever url i have actually copied over here so this entire url i'm going to paste it over here so as soon as i execute this here you can see that total the cloning has actually happened now if i probably go and see my folder so boston house pricing is basically here and you can see all your get ignore license file and readme file okay so in short what you have done is that we have cloned that entire repository now from this whatever files i will probably be creating whatever things i'll be creating i will be publishing over there right now if you remember guys already in my practicals uh you 
will be able to see that i have actually created one ipynb file and pickle file right so just copy these files back to the same folder over here which you have actually created so these two files should also be over here and we will be using this pickle file in order to do the prediction okay super easy till here if you find that your git clone is not working just go to your browser and just download git cli okay so git cli can be downloaded for windows it can be downloaded for mac or linux okay so here you will be able to see just go and click on download so mac os windows linux whatever is your os just go and download it and then it will download an exe file and just continue click on next next next next and the installation will happen do this if and only if your git clone is not working okay so this is how you have done the git clone now the next thing is that we will go ahead and open or download visual studio code and we will be doing our entire end to end implementation in this visual studio code so you can go and download for windows if you don't want windows if you want mac os or linux you can select any one of them and you can start downloading and just do the installation part installation part is very much easy with respect to visual studio code now once you have downloaded the visual studio code what you can do is that you can just open that visual studio code i have already downloaded it so over here you can see this so i will open this right now some of the projects are visible over here so let me just open the folder and the folder is nothing but the same location that we are going to go ahead with so i'm just going to execute this and here i'm going to select this folder now when it asks where do you want to save your workspace content just click don't save so now you can see all these particular files will be open and available over here right so this is my entire project here you can also see git ignore file everything and as such license is
also added the linear regression model file is also added the regression model dot pkl file is also added okay so all the files are there you can open this and you can check it out so this is the code that we have actually done you know with respect to this and the same code whatever we had done right in the past everything is visible and you can definitely do the coding here also that is also not a problem and this is basically my pickle file this pickle file is in the serialized format so i'm not going to do anything for this now when we are implementing an end to end project what all things we have to do we'll start discussing from the next video but in this video you have seen how we made sure that we cloned our repository we created our project we cloned the repository over here and going ahead any commits that i really want to do i will be doing from this specific visual studio code itself and for that you have to install git cli okay so this is it let's go ahead and discuss how we'll be working on the end to end project in the next video thank you [Music] so guys now i'm quite excited because yes we did a lot of eda we created our model we converted that into a pickle file but now is the main thing and if you know all the steps properly this will differentiate you from the other people you know and you can really be close to how people work in real world industries with respect to any projects that you're working on so i'm making sure that i will follow all the best practices i'll put in my entire experience and create some simple projects for you so that you can actually implement it on the go okay so yes let's begin now first of all i will write down some of the steps in the readme file whatever things we will be doing i will be updating all the steps over here later on we'll be using docker heroku and everything so all the steps i'll be updating over here so that everybody will be able to follow me okay so
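the cloning workflow from this section can be sketched as a short command sequence — the folder path and repository url below are placeholders for your own:

```shell
# go to the folder where you want the project, then clone the repository
# (paste the url copied from the repo's clone option in place of this one)
cd /path/to/end-to-end-projects
git clone https://github.com/<your-username>/bostonhousepricing.git
cd bostonhousepricing
ls   # shows .gitignore, LICENSE, README.md
```

after cloning, copy the ipynb notebook and the pickle file into this folder so they can be committed later.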
here first of all this is what my title looks like boston house pricing data set now the first thing what do you require already you know that we have done a lot of installations right so the first thing is the software and tools requirement so first of all as said you have to create a github account right so here i'm just going to write down github account and the same readme file will be updated in the github repository also so that tomorrow if you're probably planning for the interviews if you give your github repository and the interviewer sees you know okay you've written all the steps to basically create the specific projects they will definitely get impressed by you so here i'm just going to put all my links of the tools and softwares that we are going to use so first is github.com definitely coming to the next one the next one that we are going to use for this particular project is vs code ide so finally you will be able to see i'm just going to write vs code ide okay so here is the vs code ide that we are going to specifically use oh let's see why the coloring is not coming over here i think it will come and after this i'm just going to give you all the links from where you will be able to get this right so code.visualstudio.com now you may be thinking krish why are you writing in brackets and everything trust me this will work now see once i save it if i probably preview and click on open preview here you can definitely find out all the things available right so all these things i'm doing for that specific reason okay so let me just see why this is not coming okay it should be basically coming in this way vs code ide okay vs code ide okay fine let me do one more thing before this i also require a heroku account okay so a heroku account is also required guys so please make sure that you create your heroku account okay if i'm using a space over here at that
point of time it is creating an issue so vs code ide i'm going to use it over here for the heroku account you can basically go with heroku.com okay so this is my second thing third tool vs code ide okay this is my third coming to the fourth one which is super important that everybody should know is about git cli okay so this also i have spoken about it and git cli also you really need to download and i'm just going to give you the link for the same so here is the link i have basically copied it from there okay now if i probably preview it here let me just open this and preview this okay open preview so here you will be able to see all those things vs code okay i don't have to provide any space over here some issues with respect to spacing so these are all the things and the same readme file i'll commit in the github repository so this is the first step that you require my suggestion would be please go ahead and create your own github account create your heroku account and download vs code ide and all so just to show you how things will go ahead first of all everybody knows how to download vs code so you can download it from here you know about github account sorry git cli you can download it from here if you want to create your own github account just go over here go and sign up in this repository okay sorry sign up in github and you will be able to create your repositories also okay then coming to the next step which is heroku right so you also have to create an account with respect to heroku because this is the cloud platform we will be using in order to deploy our application and it actually provides you five applications to deploy completely for free okay then in github there will also be something called as github actions which we will be discussing as we go ahead but these are the four tools that we are going to use with respect to our implementation and everything has been updated over here with respect to this particular file so let me just quickly
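as a sketch, the bracket syntax being typed into the readme looks like this — link text in square brackets, url in parentheses, with no space between them (the space was what broke the rendering above); the urls here are the public homepages of each tool, not necessarily the exact links typed in the video:

```markdown
## Software And Tools Requirements

1. [Github Account](https://github.com)
2. [Heroku Account](https://heroku.com)
3. [VS Code IDE](https://code.visualstudio.com/)
4. [Git CLI](https://git-scm.com/)
```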
open this yeah so here it is with respect to all the things i have updated it now uh let me just go ahead and open the terminal so if i probably see new terminal over here okay to start with we will definitely create an environment okay so whenever you start a new project we definitely have to create our environment by default right now it is taking development environment what i am actually going to do is that i am going to create an environment from scratch and probably because every projects you have to create a new environment and start implementing thing so let's start and create the next environment from the next video itself but please make sure that you have all the accounts with respect to this and the format that you'll be seeing i'm updating over here in the readme file is basically to see in a better way okay so yes i will see you all in the next video [Music] so guys in this video we are going to create a new environment so that whatever coding or whatever end-to-end projects we are actually creating we'll do it entirely in that specific environment now to go with right uh what i'm actually going to do is that i'm just going to open my terminal so here is my terminal let me click on new terminal and here you can see that by default it is going to some specific environment called as development so let me just go and deactivate this because i don't require this environment altogether so it will be deactivated so this entire environment has got deactivated altogether right so let me clean the screen now first thing first let me just open here you'll be able to see multiple things like powershell git bash command prompt okay now everything like suppose if you are working in linux or mac you can definitely use your own terminal which will be given over here but with respect to windows here you'll be able to see multiple options so that you can also run commands that are related to linux or windows itself okay so what i'm actually going to do first of all 
let's go ahead and take one command prompt okay so this is my command prompt right now we are into the base environment i'm just going to clean the screen okay and here you can see this is the path of my environment now the first step usually what we do is that we create our new environment so here create a new environment for the project and it is always a better step that whenever we create a new project we have to create a new environment so to create a new environment the command that we are specifically going to use and obviously if you are using anaconda i think this is the command that you should use is conda create minus p venv that is the command and venv is the environment name and then the python version that i'm going to specifically use is 3.7 because i feel with the help of 3.7 we will be able to do all those things okay and this minus y basically says that see if i don't use this minus y and if i probably execute this much it will again ask me a confirmation whether i should go ahead with the installation or not so instead of again clicking yes over there what i am doing is i am giving minus y over here itself so that it takes that yes prompt from here itself okay so this is what we are going to do and now i'm just going to execute this command over here and this in short will basically create my environment that is over here so to begin with what i'm actually going to write is conda create minus p venv so this is my environment name sorry not command prompt name python 3.7 minus y so once i do this here you will be able to see my installation will automatically happen now here you can see that it would ask for one confirmation but since i've given minus y all the installation will take place okay so here you can see that the venv environment is getting created right so here itself you'll be able to do all the specific work okay so this will probably
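collected in one place, the environment commands dictated above look like this (venv is just the folder name chosen here — any name works):

```shell
# create a fresh conda environment in ./venv with python 3.7
# (-y auto-confirms the installation prompt)
conda create -p venv python==3.7 -y

# activate it (the environment lives in the venv folder of the project)
conda activate venv/
```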
take some time but here we are creating our new environment and whatever project we basically write over here it will be created in this specific environment also okay now the next thing what we have to do is activate the specific environment now in order to activate the specific environment here you can see that you can just write this specific command or one thing you can also do is just write something like this conda activate venv and since venv is over here itself so i'm just going to go inside this folder so here you can see that now this particular environment has got activated so let me make my face smaller so here you can basically see this environment has got activated altogether okay so finally we are inside this environment okay now whatever files we create whatever things we do we will basically be working within this particular environment and we'll be committing our code with respect to this okay so again if you open any file like this here also you can set it directly so venv this one is there you can see this this is the project that we are going to do so i'm just going to set this over here and save it okay so like this also you can basically set it and you can execute it by your wish you know independently you can basically execute it so this is how we basically create this so create a new environment is the first thing so i've written the command over here you can also see it by a preview option okay so in the preview option you'll be able to see that this is my second step after this creating a new environment and here you will be able to find out all the information okay so now in the next step what we are going to do is that we are going to start writing our code okay and over here we are going to make sure that once we create this readme file or once we create this environment we can start writing our code now one important
thing that i really want to do first of all and probably this is a small step so let me include it in this video is to create a requirements.txt file so a requirements.txt file i will be creating over here and in this whatever libraries you require you know like sklearn or any kind of libraries that you require you can basically write over here like if i take an example when i'm creating this end-to-end project i will be using a library which is called as flask so flask will be the library that i'm actually going to use and what we are going to do with respect to flask also we will be installing this one more library is sklearn since we'll be using machine learning suppose if you want pandas you can use this you want numpy you can use this matplotlib you can use this right all these libraries will specifically be required so i'm going to import this flask will be with capital f so these are all requirements now if i want to install these libraries all i have to do is write down a command over here as python sorry not python it is pip install minus r requirements.txt so this will be requirements.txt so once you run this you will be able to see that the installation will take place so all the installation will be taking place from here itself and within this environment you will be able to see those libraries will also get installed since i am inside my venv environment okay so here you can see that all the installation is basically taking place okay so any libraries that you may be requiring in the upcoming future you just need to write over here and we'll do it with the help of pip install okay now this is perfect so this will take some amount of time but anyhow the installation will take place let's consider that the installation is taking place i'm going to pause the video now and in the next video we are going to learn about some more things that we really need to do the setup as
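a sketch of the requirements.txt being written above, plus the install command — note that the pip package name for sklearn is scikit-learn, and versions are left unpinned here:

```shell
# write requirements.txt with the libraries used in this project
cat > requirements.txt <<'EOF'
flask
scikit-learn
pandas
numpy
matplotlib
EOF

# install everything listed into the currently active environment
pip install -r requirements.txt
```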
okay so yes i'll see you all in the next video [Music] so guys now you have seen that you have created your environment you have also created the requirements.txt file so anytime you know probably you want to do some kind of installations with respect to any packages you just have to write pip install minus r requirements.txt right now till here everything is fine now one thing every code that you write over here we really need to push to our repository to our github repository which we have actually created initially now in order to do that we have to configure our git cli with our github repository so that with the help of commands we'll be able to push all these files so how do we go ahead and do that so first of all we really need to set up some configuration you have created your github account with some email id right so we have to set that email id in our git cli in order to set it up first of all just go and write git config and when you write this git config global user dot name so two things you have to set up so let me just again clear it okay so one is git config minus minus global so we are going to set up the entire global user dot name so if i execute it i have already configured it so here you will be able to see that i'll be getting krishna but if you want to configure it fresh so just go ahead and write your name whatever user dot name you want to write over here so in this case i can basically write krishna right krishna so once i execute this you will be able to see that it will get executed successfully now anytime i will try to just call this user dot name you'll be able to see that i'm getting krishna this is the one configuration that you have to do the second configuration that you really need to do is user dot email now user dot email here you should basically assign the same email id through which you have actually created your github account so in my case it is krishnaik06 at gmail.com right so here
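the one-time identity setup described here, as commands — substitute your own name and the email id tied to your github account:

```shell
# one-time global git configuration
git config --global user.name "krishnaik"
git config --global user.email "krishnaik06@gmail.com"

# calling either key without a value prints what is currently set
git config --global user.name
git config --global user.email
```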
this is my github account email id so what i will do is that i will set my user dot email to this now why am i specifically doing this because later on when you commit your code or when you commit your entire repository into the github account automatically it will ask you to give the password for this email id at that point of time you can just type the password for the first time and automatically it will get saved okay so again i am not going to execute this because i have already done it but remember the commands git config minus minus global user dot name and then user dot email okay this is perfectly fine now i'm just going to clear the screen once again now you can see over here that i have created so many files i've created requirements.txt regression model i want to commit all these files and in git ignore you also have to make sure that whichever folders or files you don't want you can actually give over here so that those files will not get committed into the repository so already we have done that and by default some of the files are already written there if there is any specific file you don't want to take over there you can basically write it down so it is going to ignore that specific file perfect now what i am going to do is whatever code i have written these files the ipynb file pkl file and all i need to commit in the repository so let's go ahead and see how to do it now whenever we go ahead and commit any code there is a series of steps that we need to follow obviously you can go ahead and have a look at the documentation of git cli which definitely will be easy but most of the time what steps we definitely follow that is what i'm actually going to show you a detailed implementation part you will be able to understand through this okay so to start with let's say i want to add any file okay add any file that i really want to put into a
repository so here i will basically say git add let's say i want to give my requirements.txt or readme file or any file as such so git add requirements.txt when i press enter here you will be able to see that this file will get added so if i probably just write git status so here you will be able to see that okay modified this readme file you can see this new file is requirements.txt right all the information regarding this file will be given the untracked files are these right but here you can see that the new file is requirements.txt which we are going to push into the github repository and that is the file that i have added over here by using this git add requirements.txt but there are still some more files right readme.md this is modified because the readme file was already created in my github repository and there are two more files that are left now if you want to add just one file that is fine but if you want to add all the files so that we push them to the github repository i can basically use git add dot when i say dot it is going to basically add up all the files okay now if i probably see git status you will be able to see that all these files are basically being added okay this is the ipynb file there were some warnings with respect to the name that's fine that is not a big issue okay now i'll clear the screen so guys now we have added all the files that need to be committed to the repository so the first step obviously we have added it then in the second step we really need to do something called as commit what commit does is that from our local it basically pushes it to the staging environment okay so here let me just open our browser and let's search for git commit okay and always make sure that you follow this atlassian git tutorial documentation it is an amazing document altogether so here you will quickly be able to see that okay you have commands like git commit commits the staged snapshot this will launch a text editor prompting you for a commit
message okay so i will probably not use this because i also want to give the commit message directly in the command git commit minus a commit a snapshot of all the changes in the working directory this is also fine we will try to use this particular command git commit minus m commit message okay so here you can see a shortcut command that immediately creates a commit with the passed commit message by default git commit will open up the locally configured text editor and prompt for a commit message to be entered passing the minus m option will forgo the text editor now this is super super important guys so what i will do over here is that i will just go and open our vs code and here you know that through git status we are able to see what all files need to be pushed to our repository all these files are there now i'm just going to add git commit and the same command i'm going to use which is called as minus m so here you could definitely see what is the command minus m and along with that we will be using something called as a commit message so here what i'm actually going to do apart from this i'm just going to write my commit message and i'm going to say this is my first commit and this commit includes or i'll give a very meaningful message this commit includes requirements.txt and readme file okay so i've written a commit message over here so that anybody will be able to understand it and i'll press enter so here you'll be able to see that it is basically asking me for some very important information first of all please tell me who you are see for the first time i told you right we have to set up this global user dot email now you can see over here all the information is given now i will go ahead and write all this information and it is also saying unable to auto detect email address this is also perfectly fine so what i'm actually going to do i'll just copy this and i will go ahead and write git config and this you have
to do it for the first time okay after that you don't have to do it then i will say global user dot email okay user dot email and here i'm going to set my email to krishnaik06 at the rate gmail.com okay so once i execute this here you will be able to see that it is set now similarly i want to do it for my user dot name user dot name i will be using krishna and press enter okay now this is done now i'm again going to use this particular message git commit minus m this commit includes requirements.txt and readme file so once i execute this now you can see that four files have been changed 3167 insertions one deletion all this information has been shown and obviously you can check out this information we have a shortcut command that immediately creates a commit with the passed commit message by default it basically commits a snapshot of all the changes in the working directory okay this is what we are basically doing and again guys since this is not a dedicated git course it'll be a very big course if i go and probably explain each and everything in git but most of the time what a developer does that is what i'm actually trying to cover okay so here it is now the final thing that we have to do is basically git push now similarly you can go and search for git push in the documentation so here you can see atlassian git push so here you can see git push remote and branch okay so these are the two pieces of information that i have to give remote and branch now in order to give this remote and branch first of all you need to understand what this remote and branch is all about okay so first of all if you go over here here you will be able to see this is my main branch so i have to definitely push over here in the main branch right now still the code is nowhere over here right it is still in that specific staging environment i need to push it from here we have already done the commit and it is ready to be pushed to this
particular branch now if i really want to push to the main branch i will go ahead and write this specific command over here okay let me first of all clear the screen now so i will write git push okay right now whatever commit we have made okay let's say it is in a specific environment that environment will be something called as origin and now from the origin i'm going to push it to the main branch okay so once i write this you'll be able to see that automatically all the push will happen but before that it will also ask you for this kind of option okay and you just need to sign in with your browser to give the access for the push so here you will be able to see i'm clicking on chrome automatically this thing will happen and i'll be getting a successful message let's say and over here you can see that the total push commit has basically happened right so this is how the commit basically happens it just asks for this kind of confirmation now if i go back to my repository and if i reload it you will be able to see all the files are updated along with that you will also be seeing my readme file so this is perfect till now we have done the commit and in the commit again remember there are some important steps one is we add okay so again you just go and search for what does git add do so git add command okay these three commands we will specifically be using a lot so what is the git add documentation it will basically add it so here you can basically see all the information regarding git add this git add command will not add ignored files by default unless they are explicitly specified after that we will be doing the commit it basically creates a snapshot of all the changes in the working directory it commits the staged snapshot that needs to be pushed to the repository and then finally git push to which repository we are pushing remember that once we do this git commit right it is basically going to take from the staging environment and put it
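putting the whole add, commit, push sequence from this section together:

```shell
git add .                    # stage all new and modified files
git status                   # verify what is staged
git commit -m "this commit includes requirements.txt and readme file"
git push origin main         # push the commit to the main branch on github
```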
in something like origin right so that is the reason we wrote git push origin main okay and finally when we do git push here you will be able to see that all the information has been pushed over here so finally you will be able to see a successful github repository which has all these files now in my next step what i'm actually going to do is that i want to create an app.py file i will create a front-end application i'll use this pickle file for the you know prediction purpose and then i'll be able to get the output so that is what we are going to do in the upcoming videos so i hope you liked this particular session this was it from my side i'll see you in the next video [Music] so guys so many steps we have configured all the tools we have made sure that we can now commit to the github repository everything is perfect now it's the time that we build our end-to-end application these are all the good practices that we follow so let's first of all create an app.py file so that it will help us to build an application wherein we can take inputs and based on that from my pickle file i'll be able to get the output so to begin with what i am actually going to do quickly and all the code i will try to show you by writing here itself so i'm first of all going to create an app.py file and once we create this entire thing we'll also do the deployment on the heroku platform so let's go ahead first of all let's go and quickly import pickle now pickle is important because we require pickle to unpickle or load this particular regression model pickle file right then from flask i'm going to import flask because here we are going to use the flask library and flask is basically used to create web applications lightweight web applications so definitely this will be more than sufficient along with flask i'm also going to use request and then i'm also going to use app let's say app comma jsonify
i'll talk about all these things why they are specifically used as we go ahead and then i'll also use something called as url underscore for and render underscore template so all these things we will be using right now till render underscore template we will be using all these things okay now these are some of the imports we have already done now what we are going to do quickly first of all let me just type this and let me make the screen a little bit bigger so this is imported the next step is that i'll also be importing numpy because i require numpy and import pandas okay please make sure that you have all these libraries written in your requirements.txt file otherwise you will have issues because initially we will install all these libraries in our new environment and then we can go ahead and import them and start coding now the first step is that i will try to create my flask app and here you will be able to see i'm using my flask which will take underscore underscore name okay so here some amount of flask knowledge is also required and here you can actually see that i've created a basic flask app and here underscore underscore name is nothing but the starting point of my application from where it will run okay so here we are defining a flask app and then the first step will be that i will try to load my pickle file so pickle dot load i know what is my pickle file name so i'm going to write the open function over here obviously you know what is your pickle file name so if i go over here you'll be able to see regmodel dot pkl is your pickle file name so here i'm going to basically write regmodel dot pkl file so in short we are basically opening this pickle file in the read byte mode and then we are loading it right this is how we are basically done so this entire pickle will be in this particular model pickle so let me name this model pickle as reg model that is a regression
model so here fine we have loaded the model let me write down the comments load the model the same thing that we did in our jupyter notebook if you remember right in jupyter notebook also we have loaded in this specific way now i will in flask what we do is that we basically create our app.route so here i'm going to basically write app.route and this app.route will have the first route right the first route you can just say that this like the localhost the url and slash if i basically say i should definitely go to my home page so here i'm going to create my home page as my definition and this will probably return return a html page and let me write this html page as render underscore template and here i'm basically going to write home.html okay so this will basically be my render underscore template and home.html so home.html i have not yet defined but just consider that this will basically be my html page where i can probably say welcome everyone something like this so by default once i hit this flask app it is just going to redirect to the home.html page okay and that is how we basically go with respect to flask okay and then we have app.route now we are going to make sure that we create a predict api so for creating predict api what i will do is that i'm just going to create an api wherein using postman or any other tool you know we can send a request to our app and then we can get get the output so for this i will be writing slash predict slash predict underscore api so this will be my api uh that i'm going to use apart from this i'll also be using something called as methods is equal to post right so this will obviously be a post request because here from my side i am going to give some input and that will actually capture the input then we will go and uh basically give that input to our model and our model will give the output right so here i am going to basically write the definition as predict underscore api and this will actually help us to do the 
prediction with respect to this okay now quickly let's see what code we will be writing over here and you may be thinking krish how you are getting all this data all this code automatically over here guys i use a an amazing extension and visual studio code that is called as github copilot so github copilot uh you know as a as an influencer as an open source contributor github has provided me for free and you can also request for it uh you know from the github itself so first of all what i'm actually going to do is that i'm going to write data is equal to request dot json and here i'm basically going to use data over here so what this basically indicates that uh whenever i hit this predict underscore api you know then what will happen is that the input that i'm going to give i'm going to make sure that i give it in the json format which will be captured inside the data key okay so from here as soon as i hit this api as a post request with this information whatever information is present inside this data we are going to capture it using request dot json and then this will get stored in this data variable that is what it basically does okay and i can also print this data so that you will be able to see that same information coming over here so now once we get the data the first step you remember in our linear regression what we did is that we first of all did standardization right so if i probably open my linear regression ml implementation file so here if i go down you will be able to see that uh we have performed something called a standardization let me just zoom out a little bit so here if you probably see after the train test split we did something called as standardization right now during the standardization uh we forgot to basically you know uh import or transform this particular standardization into a pickle file because the same pickle file we will be using here also in our app.py already we created a pickle file for the model but not for the standardization 
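To see why the *fitted* scaler has to be persisted, here is a minimal sketch (with random stand-in data in place of the Boston features): standardization at serving time must reuse the mean and standard deviation learned from the training data, not refit on the incoming record.

```python
# Why we pickle the fitted scaler: serving reuses the training statistics.
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(loc=10.0, scale=3.0, size=(100, 3))  # stand-in training data

scaler = StandardScaler().fit(X_train)   # learns mean_ and scale_ from X_train

record = np.array([[10.0, 10.0, 10.0]])  # one incoming record, shape (1, n_features)
scaled = scaler.transform(record)        # standardized with the TRAINING statistics
```

A scaler fit on a single incoming record would map every feature to zero, which is exactly the bug that pickling the training-time scaler avoids.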
So what we will do quickly is execute the entire notebook again, up to that line, and create the scaler pickle file. Let me just run it cell by cell — train-test split is done, then we reach the standardization step, where we do scaler.fit_transform on X_train and scaler.transform on X_test. Now here is the code I did not write at that time, to convert the scaler into a pickle file: import pickle, then pickle.dump(scaler, open('scaling.pickle', 'wb')) — dumping it in write-byte mode. Once we run this, a scaling.pickle file appears in the local folder. Perfect. I do not need to re-run the remaining cells, because I already have the regression model pickle file.

Now, back in app.py: we take the incoming data, standardize it with scaling.pickle, and then do the prediction. But first, a detail — the data arriving as JSON comes in key-value pairs. If I print data.values() I get the dictionary values, but that alone is not sufficient: I first have to convert them into a list, list(data.values()), which gives a single list. Then, just as with the pickled regression model, we have to reshape: after converting to a list I wrap it in np.array and call .reshape(1, -1). Why (1, -1)? Because I am telling it this is a single data point, one record — the standardization we are using always expects a single record with the full number of features from our dataset. If I print this, I can see it. Then I pass it through the scaling model, but before that I load it: scalar = pickle.load(open('scaling.pickle', 'rb')), opening it in read-byte mode. With the scaler loaded, all that remains is new_data = scalar.transform(np.array(list(data.values())).reshape(1, -1)) — the transform function, applied to exactly the reshaped values you saw: data.values(), reshaped to (1, -1).
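The pickling and reloading steps above can be sketched end to end. Random 13-column data stands in for the Boston training features so the snippet runs on its own; in the real notebook, scaler is fit on the actual X_train.

```python
# Persisting the fitted scaler (scaling.pickle) and applying it to one record.
import pickle
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X_train = rng.normal(size=(100, 13))      # stand-in for the Boston training set

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # fit on training data only

# Save the fitted scaler, as done in the video
pickle.dump(scaler, open('scaling.pickle', 'wb'))

# In app.py the scaler is loaded back and applied to one reshaped record
scalar = pickle.load(open('scaling.pickle', 'rb'))
record = rng.normal(size=13)              # stand-in for list(data.values())
new_data = scalar.transform(record.reshape(1, -1))   # shape (1, 13)
```

The round trip preserves the learned statistics, so transform behaves identically before and after pickling.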
Once I do this, I get new_data, the transformed input. Finally I apply my regression model: reg_model.predict(new_data) gives me the output. I will print the output too so you can see it. The prediction comes back as a two-dimensional array, so I take the first value and return it with jsonify(output[0]) — since it is a 2-D array, exactly like the final output we saw from the regression model in the notebook, I have to index with [0] to get the actual value. That is the complete predict_api.

Finally, to run this, I write if __name__ == "__main__": app.run(debug=True) — debug mode, so that I can debug it. Perfect, this is the entire code we are going to run. Remember, predict_api here is used purely as an API; we will exercise it with Postman, which I will show in the next video.

One more thing I missed: render_template. Whenever we go to the home page it should redirect us to home.html, and render_template in Flask looks inside a templates folder. Right now I do not have one, so I create a templates folder and, inside it, a file called home.html. For testing purposes I just put an h1 saying "Hello World", so that something shows up on the home page. Later on we will build a proper HTML page in this same Flask app, send data from it, and get the output back — but for now, in app.py we have created an API, predict_api, and that is what I will call. That is the entire coding for this part. In my next video I will first download Postman, then run this file and do the prediction with Postman — the JSON data also needs to be created, and I will show that too. I hope you have understood; if not, keep practicing: this is the code you really need to be able to write, and a basic amount of Flask knowledge is required. I'll see you all in the next video, thank you.

[Music]

So finally, guys, we have done all the coding — again, step by step, using Git and everything — we have coded the entire web application, and now it is time to run it. Let's see whether we get any errors; if we do, we will fix them, and if we are lucky it will just start running. Let's open the terminal: I get PowerShell by default, so I switch to the Command Prompt, which is still inside my venv environment, and run python app.py. Once it runs, you will see the address, 127.0.0.1:5000. I click on it, and there it is: Hello World. Perfect.
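The whole app.py flow described so far can be sketched as one self-contained snippet. Instead of loading reg_model.pickle and scaling.pickle, tiny stand-in objects are fit on random data here so the sketch runs on its own; in the real app those two lines are pickle.load(open(..., 'rb')) calls, and the home route renders templates/home.html instead of returning plain text.

```python
# app.py, sketched with stand-ins for the two pickled objects.
import numpy as np
from flask import Flask, request, jsonify
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 13)), rng.normal(size=50)
scalar = StandardScaler().fit(X)                            # stand-in for scaling.pickle
reg_model = LinearRegression().fit(scalar.transform(X), y)  # stand-in for reg_model.pickle

app = Flask(__name__)

@app.route('/')
def home():
    return 'Hello World'   # the video returns render_template('home.html')

@app.route('/predict_api', methods=['POST'])
def predict_api():
    data = request.json['data']                          # {"data": {"CRIM": ..., ...}}
    arr = np.array(list(data.values())).reshape(1, -1)   # one record, all features
    output = reg_model.predict(scalar.transform(arr))    # standardize, then predict
    return jsonify(float(output[0]))                     # first value of the result

# To serve it, run `python app.py`, whose last lines are:
#   if __name__ == '__main__':
#       app.run(debug=True)
```

The float() cast around output[0] keeps jsonify happy, since NumPy scalars are not directly JSON-serializable in all Flask versions.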
You can see that I had created an API, and the API name was /predict_api. What happens if I just open /predict_api in the browser? Obviously it will not run — Method Not Allowed for the requested URL — because the method is POST: the client has to send some information, the data the endpoint is expecting. For this we will use Postman, so please go ahead and install it: on the Postman site, click Download for Windows, take the Windows 64-bit build, and you get an exe of about 143 MB; installation is just clicking next, next, next. Once it is installed you will see a workspace like this — I have already created many requests, but not a problem.

Click to create a new request. First, the request type: it is a POST request. Then the URL: 127.0.0.1:5000/predict_api, which is what you get from your local address. Then the Body tab: the type of data we are sending is JSON. There are different options — form-data, raw, binary and so on — so click raw and set the format to JSON. How should the JSON look? It should be key-value pairs, so I take the JSON body I prepared and paste it in: "data" is my main key, and inside it I give a value for each feature I'm sending to the model — CRIM, ZN, CHAS, NOX, RM and the rest. You can put in any values. To check that this is valid JSON, search for "JSON validator", open the first link (jsonlint), paste it in, and click Validate JSON — it confirms the JSON is valid.

So back in Postman: the data is prepared (again, any values are fine), the method is POST, the URL is my API — I hit Send, and I should get my response as the output. And once I click Send, there it is: the output is 30.08. That is what my predict_api returned: as soon as I hit it, the endpoint picked up the data, converted it into an array, reshaped it to (1, -1), transformed it with scalar.transform, predicted, jsonified the output, and returned it to my Postman. You can try different values: if I change one value and hit Send, the result changes; if I make this one 1.31 and send, I see 30.5; and if I make CRIM a much larger value, say 7832, and send, the predicted price of the house changes.
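The raw JSON body pasted into Postman can be sketched in Python. A top-level "data" key holds one value per feature; the numbers below are purely illustrative (not the exact values typed in the video), and json.dumps/json.loads doubles as a local validity check, mirroring the jsonlint step.

```python
# The Postman request body for /predict_api, built and validated locally.
import json

payload = {"data": {"CRIM": 0.1, "ZN": 18.0, "INDUS": 2.31, "CHAS": 0.0,
                    "NOX": 0.54, "RM": 6.57, "AGE": 65.2, "DIS": 4.09,
                    "RAD": 1.0, "TAX": 296.0, "PTRATIO": 15.3,
                    "B": 396.9, "LSTAT": 4.98}}

body = json.dumps(payload)          # the raw text you paste into Postman
assert json.loads(body) == payload  # round-trips cleanly, so it is valid JSON
```

If the model was trained on all 13 Boston features, the "data" dictionary must carry exactly 13 entries, in the same feature order as training.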
And this is how you check everything with Postman — super: the project is absolutely working, and working well. Now one final thing: commit everything into the repository. First I stop the running server — I press Ctrl+C and it closes — then clear the screen. Now the same steps as before: git add . to stage everything, then git status — you can see all these files have been modified; fine, let's commit. Next, git commit -m "this is the commit for web application", and the commit happens. The final step is git push origin main, and you can see the push completing — done. Now I go to the browser and reload to see whether my GitHub files were updated — and yes, all the files are there. I will also update the README with all the steps we did; you will find it in the repository. If you really want to see all the commits that have happened, just click here: there are three commits, and "this is the commit for web application" is the most recent. You can browse the repository at that point in time, and you can also click on a commit to see
what the commit details are — what files got added, what code changed; everything is there. That is how you review the commits you have made. Everything is fine up to here. Next comes deploying this to a cloud; once it is deployed we will dockerize it and use GitHub Actions — there are many things we are going to learn as we go ahead, so stay with me, guys: if you can do one project properly, you will be able to do all of them. Thank you, this was it from my side; in the next video we will see how to actually deploy it.

[Music]

So guys, in our previous video we created predict_api as a POST endpoint: we collected the data using Postman, passed it through the scalar transformation (standardization) and, apart from that, through the regression model, which gave us the output. But instead of only exposing an API, why not create a small web application where we provide the inputs, submit the form, take the data, and do the prediction with the model we have? Let's build that.

First I add another route: app.route('/predict'), a new function altogether, with the definition def predict, and again this is a POST method. The idea is this: we will have an HTML page, and through its form fields the user supplies all the input values — all the features you see here, like CRIM, ZN, INDUS, CHAS, NOX, RM and so on. The form submits those values to app.py as a POST request.

So first I grab the data. For this I use request — you know we have already imported it — and write request.form.values(): whatever values were filled into the form are present in the request object, so this captures all of them. Very simple so far. Then I convert each value to float, because the model expects floats (integers would also work, but float is the safe choice): I run a loop over every x in request.form.values(), convert it to float, and collect the results in a list. Then I build the final input with scalar.transform again, since we still need the standardization step, wrapping the values in np.array and reshaping to (1, -1). I print final_input just to see what is going into the model, and finally reg_model.predict(final_input) does the prediction. As before, the result is a two-dimensional array, so I take out the first value — that is my output. Finally I return render_template — super important in Flask — re-rendering home.html, where I will keep a placeholder called prediction_text, and I pass prediction_text="The house price prediction is {}".format(output). So after we get the output, Flask renders home.html and replaces that placeholder with the message and the predicted value. Very simple, and that is the entire predict function.

Now one super important thing — let me save this. I created home.html earlier, and I have now made some changes to it. Designing home.html is not the important part; in short, I have created a form, and that form will post to the /predict function.
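The form-backed /predict route described above can be sketched as follows — again with stand-in model and scaler fit on random data so the snippet runs on its own; the real app loads them from the pickle files and renders home.html with the prediction_text placeholder instead of returning plain text.

```python
# The /predict route that backs the HTML form, with stand-in objects.
import numpy as np
from flask import Flask, request
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X, y = rng.normal(size=(60, 13)), rng.normal(size=60)
scalar = StandardScaler().fit(X)
reg_model = LinearRegression().fit(scalar.transform(X), y)

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
def predict():
    data = [float(x) for x in request.form.values()]       # every form field, as float
    final_input = scalar.transform(np.array(data).reshape(1, -1))
    output = reg_model.predict(final_input)[0]             # first value of the 2-D result
    # Real app: return render_template('home.html',
    #     prediction_text="The house price prediction is {}".format(output))
    return f"The house price prediction is {output}"
```

Note that request.form.values() yields the fields in submission order, so the form inputs must appear in the same feature order the model was trained on.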
Here are all my input text fields, one per input feature, which I will fill in. See, HTML is not that important for a data scientist — if you know it, well and good, but basic HTML is more than sufficient to build this front-end. Finally there is a Predict button inside the form: clicking it sends the form to the /predict URL, and the whole predict function gets called. And where does the output get displayed? Just below, there is the placeholder in double curly braces — the same prediction_text I pass from render_template.

Let's see whether this runs, fingers crossed — and if we hit a problem, remember: whenever you are in the command prompt, make sure you are inside the venv environment before running anything. So, python app.py — and I get an error: I had not saved the file. Save it, run python app.py again, and now the URL appears. I click it, and there it is: Boston House Pricing Prediction, with all my features and a Predict button — a simple HTML page, created just to practice. I will place some values (any values you like; I have tested a few before, but it is entirely up to you) and click Predict.

When I click the Predict button, home.html hits the /predict URL in app.py; there we pick up all the values we get from that form, convert them into an array of the right shape, transform them, predict, and display the output back in the same home.html as the new text. Let's go ahead and click Predict — and something has happened: scrolling down, "The house price prediction is 13.92". Superb. You have created a simple front-end application where a single click on Predict interacts with the model and returns the output.

So you can see I am getting good predictions here — I just put in some random values; test with other values and see what the prediction is. Now that the application works, I press Ctrl+C to come out, clear the screen, and commit: git add . stages everything, git status shows the modified files — perfect, not a problem — then git commit -m "web app is ready" records the two file changes, and finally git push origin main. The push goes through and everything is fine.
Everything is done — validate it on GitHub if you want. I reload and there are four commits; the fourth is "web app is ready". Again you can view the differences and the commit details, all the information that was added. In my next video I am going to deploy this entirely to a cloud platform called Heroku. First we will do a basic deployment, then we will do it with Docker, and also with GitHub Actions, which is a kind of CI/CD pipeline — so we will see a bit of MLOps while doing this deployment. That was it; I'll see you all in the next video, thank you.

[Music]

So guys, till now we have seen the complete end-to-end application on the Boston house pricing dataset, running locally, with both the API and the full web application working perfectly. Now it is time to deploy this application to the cloud, and the cloud we are considering for this project is Heroku. Heroku is a platform-as-a-service: all you have to do is get the configuration right and deploy — it automatically provisions an instance in the cloud and runs the entire code there.

To start, if you are planning to deploy on the Heroku platform, we first need to create a Procfile. You may be confused about what a Procfile is: for Heroku apps, whenever you deploy, the Procfile specifies the commands the app should execute as soon as it starts.
That is why we specify a Procfile — and one nice thing about Visual Studio Code is that it recognizes the file type automatically. So why am I adding this Procfile? It gives a command to the Heroku instance: on startup, this is what to run. And the command we are going to use involves Gunicorn — the "Green Unicorn". Why is Gunicorn used? Gunicorn is a pure-Python HTTP server for WSGI applications; it allows you to run a Python application concurrently by running multiple worker processes. So in the Procfile I write a single line: web: gunicorn app:app. Why exactly this? Gunicorn lets the application scale — if, say, a thousand users are hitting whatever web application we are creating, it distributes the requests across multiple processes, among other things; that is the importance of Gunicorn, in short. And in app:app, the first app refers to the app.py module, and the second app is the name of the Flask application object inside it — that is why it is written app:app.

The other change is in requirements.txt: since the Procfile uses Gunicorn, we also have to add gunicorn there so the installation takes place when requirements.txt is processed. And those are all the files we need: when we provide requirements.txt and the Procfile, the Heroku instance automatically understands — if requirements.txt is there, it installs all the libraries listed in it. So the final updates: make sure you have the Procfile and the requirements.txt, then upload all of this to GitHub, and from GitHub we will deploy to the Heroku cloud. Committing to GitHub I will give you as an assignment — write the Git commands (git add ., commit, push) and push it to the repository; please do it. In the next video we will do the deployment. I hope everybody is able to do this configuration setup — creating the Procfile, updating requirements.txt, and finally committing to the GitHub repository. I will see you in the next video.

[Music]

So finally, guys, all the files have been committed to GitHub; now all we need to do is the deployment on Heroku. There are multiple ways of doing the deployment — I will first show you the easiest, and in the upcoming videos we will use Docker and CI/CD pipelines — but this time, a very simple deployment. I have created the Procfile, and in requirements.txt you can see gunicorn and everything else is available. Now, first thing: just go ahead and log in
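Before touching Heroku, here is the shape the two deployment files end up with (package versions left unpinned, exactly as in the video; in practice you would pin them for reproducibility). The Procfile is this single line:

```
web: gunicorn app:app
```

Here `web` declares the process type, and `app:app` means "the Flask object named `app` inside the `app.py` module". And requirements.txt, with gunicorn added alongside the libraries named earlier:

```
flask
numpy
pandas
matplotlib
scikit-learn
gunicorn
```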
to your Heroku dashboard. Once you are logged in, Heroku lets you create five applications for free, and we will do this the easiest way — no command-line interface, just clicks. Click New → Create new app. I try the name bostonhousepricing — not available, what a problem — so bostonhousepricing1, which is available, and I create the app. Once the app exists, three deployment methods are shown: Heroku Git, GitHub (connect to GitHub), and Container Registry using the Heroku CLI. I will not use the Heroku CLI or the container registry; let's do it via GitHub. Click GitHub, and it asks you to connect and authorize your GitHub account. I have already authorized mine, so I search for the repo I want to connect — bostonhousepricing — and click Connect. Now the entire repository is connected to the deployment platform, and remember, Heroku is a platform-as-a-service: you hand it the files from GitHub and it does the installation automatically, reading requirements.txt and everything else.

You can also enable automatic deploys: if I enable it, any change I push to the repo is automatically redeployed on this Heroku platform. I am not going to enable it, because I want to deploy only when I choose — though usually, when we build CI/CD pipelines, we do set up automatic deployment. So I do a manual deploy: I have just one branch, main, and that entire branch gets deployed to the Heroku platform. I click Deploy Branch, and the whole process starts. You can see: building on the heroku-20 stack, installing Python 3.10 — we did not specify a Python version, so it did not get one from us and installed this default; if you specify Python 3.7, it installs Python 3.7. Then Flask — and always remember, with a higher version, code written for an older one usually still works, unless it is a big switch like Python 2 to Python 3. Then pandas, matplotlib, gunicorn and its dependencies, and scikit-learn (the recent version) are all downloaded. We did not pin package versions in requirements.txt either; for this app it works, but for a larger application you may find dependency problems — to fix those we will focus on Docker, and then we will not have to worry much about it. The deployment happens, and you can also check the logs: click on Logs, and any errors that come up will show there. Right now it shows "build succeeded" — perfect. I go down and open the web application, and wow, this is amazing: with just one click, the whole application is running, served from my bostonhousepricing1 herokuapp.com address. Let's test it: I put in some values, click Predict — and the prediction works; everything is fine. If I really want to check the API too, I open Postman, change the URL to the Heroku address plus /predict_api, keep the same values, and click Send — and I get the right output back. I can definitely play with any value I want: here it shows 30.32; if I change this to 0.8832, I see 30.22. Everything works — the API and the Flask web app — and any errors or print output appear in the Heroku logs, so you can validate things from there. Perfect, we have done the deployment. Quite amazing — the first deployment on Heroku, probably the first end-to-end project in our course that we have deployed. Next we will learn about Docker and see the advantages of using Docker when we deploy. I will see you all in the next video, thank you.

[Music]

Hello guys, in this video I am going to deploy a machine learning application to a cloud server like Heroku with the help of Docker and GitHub Actions. Whenever I say GitHub Actions, I am basically talking about a CI/CD pipeline — that means as soon as I commit anything here, the deployment should automatically happen on the server. And before that, I will also make sure that I dockerize this
entire application that i've actually created now right now here you can see this is my machine learning application a simple machine learning application and this machine learning application is nothing but boston house pricing uh data set we have basically used so whenever i dip i probably have created a front end wherever i put inputs and predict it it is going to give me the price of the house so first of all let me quickly run this and again guys i've created many this kind of project video so you can definitely watch that but here i've directly written the code you know probably done the exploratory data analysis and all so if you really also want to refer this definitely refer it from the github okay but i'm not going to implement this because i've done it many number of times so first of all uh if i just go and uh write python uh app.py here you'll be able to see how my output will look like okay so it is running let's see so for for the first time i think it will it'll take some time anyhow uh i'm just going to run this 127.0.0 now here you'll be seeing that my application will look something like this okay so my application looks something like this so here i'm just going to put up some information like this okay let's say i'm putting up all the information like this over here and if i probably do the prediction and this is right nine localhost right and i'm going to get the prediction something like this okay now what i'm actually going to do is that i'm going to basically deploy this i'm first of all going to dockerize this and run it as a dockerized container okay and what is dockers and all i've already created a playlist on to that you know just understand that dockers saves a lot of time with respect to configuration setups and all right because entire configuration is basically made within a docker image and that can be run as a container anywhere let it be your operating system or something cloud server wherever you want you can basically do it 
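as an aside, the /predict_api endpoint tested with postman earlier can also be exercised from a small script. this is only a sketch: the url placeholder and the exact feature names in the payload are assumptions and must match whatever the flask route in app.py actually reads.

```python
import json
from urllib import request

# hypothetical payload; the keys must match the feature names
# that app.py's /predict_api route expects in the request JSON
payload = {"data": {"CRIM": 0.1, "ZN": 18.0, "INDUS": 2.3, "RM": 6.5, "LSTAT": 4.9}}

def predict(url: str, payload: dict) -> dict:
    """POST the feature payload as JSON and return the decoded response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# example call (not run here; substitute your own app url):
# predict("https://<your-app>.herokuapp.com/predict_api", payload)
```

this mirrors what postman is doing under the hood: a POST with a json body to the /predict_api route.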
okay, so what i'm actually going to do: i'm going to minimize this, let me just do ctrl+c. now, as the first step, we will create a docker file. creating a docker file is very simple, not that difficult at all: just go over here and create a file named Dockerfile, and make sure that you use this exact naming convention; then vs code will automatically recognize it as a docker file.

now, whenever we create a docker file, the first thing to understand is what exactly a docker image is. with the help of this file, whatever instructions i write actually create a docker image, and that docker image can be taken and run inside a container, which we specifically call a docker container, on any operating system. if i want to run it on my local machine, i can run this entire docker image as a docker container which interacts with the kernel of our operating system. it is not like a virtual machine; just understand that it is a kind of container which can run independently by communicating with the kernel of the operating system.

now, in order to create a docker image, there are some commands that we will be using: first the FROM command, next the COPY command, third the WORKDIR command, fourth the RUN command, fifth the EXPOSE command, and finally the CMD command.

before the FROM command, let's see why docker is so super important, guys. suppose i don't dockerize this application and i want to give this same application to a friend to run. whatever installation, setup and library configuration i have done, he has to redo all those steps manually, and because of this he may face errors or issues. that is the major problem. you have heard this saying, right: a developer's code works on the developer machine, but when it is deployed onto the qa machine and qa is testing it, qa says "oh, something is not working", and the developer says "oh, it is working fine on my system". so what is the main issue here? there may be some configuration or dependency mismatch, some hardware issue, or some operating system issue; say i am running an application on a windows machine and suddenly i deploy it on a linux machine, i may get issues there too. docker helps us prevent that, because in the docker image we set up the entire base configuration once, and we use that same base configuration on every machine we want to deploy to.

so the first command is FROM. this command is used to select a base image. as i said, a docker container also requires a base image; a base image means, say, a linux operating system with something installed on top of it. so if i write FROM python:3.7 (i'm not going to use alpine, which is a different, slimmer variant of the base image), what this does when we build the docker image is pull the base image from docker hub: python:3.7 basically means a linux base image with python 3.7 installed on top of it, and then it does the other necessary configuration. in short, as soon as i write FROM python:3.7, it fetches that particular base image, linux with python 3.7 on top, from docker hub, because all the images are present there.

the next step is COPY. copy basically means that whatever code i have in this repository — see, all these four files that i have — i need to copy into that particular base image. so i'll just say COPY from my current location to a location which i am going to name /app; that basically means i am creating an app folder inside the base image, and all my local content will be copied into that app folder. and then the next step is that i'll make that my working directory with WORKDIR, so here also i give the same location. this is super important: in three steps, i've taken my base image with FROM, i've copied all my code from here into an app folder inside that base image, and i've made that the working directory.

now the next command is RUN. see guys, in any machine learning application there will be some dependencies listed in requirements.txt (if you are doing a javascript project, you would similarly have to install some packages), and i need to install all of these before going ahead. for that i use the RUN command: inside it i'll write pip install -r requirements.txt. this does all the installation; all the dependencies get installed here.

next i will EXPOSE a port. see, when the docker image is run as a container, in order to access the application inside the container we have to expose some port; only then will we be able to reach the application at its url, because it is through that port that you access it. so we are going to expose a port inside that docker container, and i will write that port as a placeholder called $PORT. why? because when we deploy to the cloud or a server, the cloud automatically assigns that particular port inside the container.

and finally i write my CMD command, which is used to run my web application, in this case my entire application. for this i'm going to use gunicorn; gunicorn helps you run a python web application inside the heroku cloud itself. here i'm going to assign four workers. workers matter a lot: whenever requests come into the application, gunicorn divides them across the worker instances. let's say a thousand requests are coming: with four workers, each handles roughly 250 requests, in parallel, which makes the whole process efficient. then i'm also going to bind: this ip address will be the local address in the heroku cloud, bound to the port, followed by my app file along with my app object. so see over here, app:app — the first app is the file where my application lives (app.py), and the second app is the flask app name inside that file. so app:app basically means: take that particular file and run the app object inside it. so that is the entire configuration.

what this binding does is super important, guys; understand it very well. the port number we have exposed in the container gets bound to the local ip address, whatever local address we get in the heroku cloud. it's just like localhost: i know it is 127.0.0.1 on our machine, but 0.0.0.0 is what we assign over there, and whatever port heroku assigns to the container, we will be able to access it through that. very clear, very simple. gunicorn is definitely required whenever you deploy anything on the heroku cloud platform. so now this is done; this is my docker file.

the next thing i'm going to do: i also need to configure my github actions. in order to configure github actions for a ci/cd pipeline, two folders need to be created: one is .github, and inside it another one called workflows. why have i created this? because as soon as i push this entire code into the github repository, github sees this .github/workflows folder, and inside it i will create a file called main.yaml. this will hold the entire process of what needs to happen as soon as i commit: first we need to build this docker file — build basically meaning the entire image gets built — and then we push this image, in the form of a container, to the heroku platform. the entire configuration for that is set up in this main.yaml. now guys, we don't write this kind of main.yaml from scratch; it's already available, some people have already written it, so i'm just going to copy and paste it over here, but just understand
what my workflow is. this main.yaml will define the entire workflow. so here i set the workflow name, "deploy to heroku", and the branch it triggers on: as soon as any push goes to the main branch, this entire build process starts. the main thing we are doing is build, push and release the docker container to heroku. it runs on ubuntu-latest, that is, it takes an ubuntu operating system and does this entire process on it. but here you'll see that it requires three main pieces of information: the heroku email, the heroku api key, and the heroku app name. these three pieces of information are super important.

now, where do i get this information? obviously from heroku itself. here you can see that i have logged in; let's say i want to deploy to this boston house pricing one app. these are my secret keys, and they need to be provided to github actions. so if i go over to my boston house pricing repository and go to settings, in the settings there is something called secrets; since i have to make this a ci/cd pipeline that runs as soon as i push my code to this github repository, i have to add some secret keys there. click on secrets, click on new repository secret, and here i write my first repository secret, the heroku api key, so i will name it HEROKU_API_KEY. where do i get my heroku api key? just go into the heroku dashboard, click on account settings, and if you scroll down you will see the api key; just reveal it, copy the entire thing, and paste it over here. this indicates which heroku account i'm actually using.

as soon as i click this, one secret is added, but we still need to add two more. so i'm just going to click on new repository secret again, and my second secret will be the heroku email. the heroku email is the same email id that i'm using for my heroku dashboard, so it will be krishna06@gmail.com. i give this configuration so that my deployment happens successfully as soon as i do any push to the github repository. so this is my heroku email. similarly, the third one is the heroku app name. the app name is super important, because it decides into which app i deploy my code. so here i write HEROKU_APP_NAME, go back to my account, and since i want to deploy to this boston house pricing app, i copy the name, paste it over here, and add the secret.

so all this information is in place. now what happens? see, we have set up github actions, which basically means that as soon as i push anything — right now you can see there is no .github/workflows folder present in the repository yet — as soon as this folder, with the file written inside it, is seen, this entire process will happen automatically: in short, the build, push and release of a docker container to heroku will happen automatically from the github repository itself. and this build, push and release is super important, because there will be many people working in a team, and many people may commit multiple things; if a build, push and release happens on every commit, you can always tell when someone's changes introduce an error, so this build is a very important step every time. this is what we follow in real-world industry, where we move our content from development to staging, staging to pre-prod, pre-prod to production; these are the kinds of containers we create and push.

so this is done. now let's go ahead and open my terminal. as usual, i'll go to my environment, and now i will add everything that i have done, so git add; if i do git status you'll see that this many files have been created. let's see whether i have missed anything — no, no, everything is here, perfectly fine. now i'm going to push everything — oh sorry, before that i need to commit the snapshot, so i'll write git commit -m "dockers and github action changes". this is done; now all i have to do is git push from origin to main. so you can see that everything has gone through.

now, as soon as i open github, see, something amazing will happen. if i reload this, an orange indicator comes up over here. now let's go and see the commit, so i will go and click it; now you see, from these details, this entire deployment is happening automatically. oh — something went wrong: "input required and not supplied: heroku_app_name". so something is wrong; let's go and see what has gone wrong. i had actually provided the heroku app name, but i don't know what happened. okay, see, it should have APP_NAME in it and i had named the secret incorrectly; because of that it failed. we will fix it, no need to worry. so let's do it again: i'll add a new repository secret, go back, copy the entire app name — the app name is nothing but this boston house pricing one — and add my secret. add secret, perfect. now let's rerun this: rerun all the jobs. a new attempt of this workflow, including all the jobs — yes, rerun these jobs.

okay, so now you can see the build has actually started; the job has started. first of all it has taken ubuntu, whatever configuration we have given. now see, every step, even the building of the docker image, happens automatically: pull, commit, download, verifying checksum. see, the copy step is happening, the working directory change is happening automatically — this is quite amazing, guys — the pip install is running, and all the requirements are getting installed; you can see it all over here. in this build process you did not have to do much of anything; these things happen automatically. that is the power of a ci/cd pipeline, and that is the reason why devops and mlops jobs are heavily in demand in every company: this process has to be done for every project. so this is going to take some time; i will just wait till all this installation takes place, and at the end, when this entire thing is done, it will get pushed to this boston house pricing app itself, and then once you open the app you will be able to see the output.

so let's see. okay, still going — layers already exist, pushed, pushed, pushed. it is going to take some time because there are many files and many installations taking place; this entire process will get run. if you get any errors, you can rerun it; in fact, always welcome errors — if you get errors and you are able to solve them, trust me, you will learn in an amazing way. so now i'm just going to pause, and once it is pushed we will see the application; i'll come back in some time.

oh, see — releasing container. you can see all the steps, and please note them down, guys; these steps are super important. whatever we have written in the docker file is happening, because this is now being created as a container. now finally you can see my job is completed. i'll go over here and open the app — let's see, still nothing is working; you can also check the logs if you are finding any issues in boston house pricing. and ta-da, it's here! let's now go and execute it — here you go — and now this entire thing is running in a docker container. if you don't believe me, i'll show you the proof too. once i predict, this is my output; it's working absolutely fine. and if i go back into the heroku dashboard, you'll see that it is now running as a container: before, if you just deploy a plain python application, it shows up one way, but if you deploy it as a container, you see it like this. so i hope you liked this particular video on deploying your data science application to the heroku cloud with the help of docker and github actions.

[Music]
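putting the pieces from this walkthrough together, the docker file described above boils down to roughly the following sketch. the python version, worker count and app:app layout are the ones used in the video; the contents of requirements.txt are assumed to list flask, gunicorn, scikit-learn and the rest.

```dockerfile
# base image: linux with python 3.7 preinstalled, pulled from docker hub
FROM python:3.7
# copy everything from the repo into an /app folder inside the image
COPY . /app
# make /app the working directory for the commands that follow
WORKDIR /app
# install the project dependencies listed in requirements.txt
RUN pip install -r requirements.txt
# $PORT is a placeholder; heroku assigns the actual port at run time
EXPOSE $PORT
# gunicorn with 4 workers, bound to the port heroku provides;
# app:app means the "app" flask object inside app.py
CMD gunicorn --workers=4 --bind 0.0.0.0:$PORT app:app
```

the CMD line uses shell form deliberately, so that $PORT is expanded by the shell when heroku starts the container.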
Info
Channel: Krish Naik
Views: 272,100
Keywords: yt:cc=on, end to end machine learning project pdf, end to end machine learning projects with deployment, end to end machine learning project github, end to end machine learning project ideas, end to end machine learning project steps, end to end machine learning projects kaggle, end to end deep learning project, end to end machine learning projects python, krish naik ml models deployment
Id: MJ1vWb1rGwM
Length: 164min 56sec (9896 seconds)
Published: Tue Aug 30 2022