Azure ML orchestration and deployment

Captions
Hello everyone, and welcome to AI42! Hello Eve, hello Leila, how are you?

Hello! Thank you for having me again.

Thanks so much, Håkan and Eve. We should say, for the viewers, that this is actually part two of a series, but I think you will get a lot out of it even if you haven't seen part one. If you do want to watch part one, just go to our YouTube channel and you can follow the first part of this two-part series. How are you, Eve?

Thank you for asking, I'm feeling amazing. I lost my voice in Las Vegas, but otherwise all good. Maybe I should ask you if you know what day it is today; it's a special day.

Oh, it is the 14th of December, is it?

The 15th, actually. But it is also the day we hold our 21st AI42 session of the year.

Wow, that's something to celebrate. Lovely, great news, thank you. OK, I think we can now have a quick introduction to AI42 before we continue.

Yes, welcome to AI42. The motivation for starting AI42 comes from the recognition that there is really no good starting material for getting into this field. AI42 is a strong team of three Microsoft MVPs: me, Eve, and Gosia, who couldn't be with us tonight but is usually here too. We provide a series of lectures that will help you jump-start your career in data science and artificial intelligence, and we give you the necessary know-how to help you land your dream job, as long as it is related to data science and machine learning. The concept is simple: professionals from all around the globe explain the underlying mathematics, the statistics and probability, and the data science and machine learning techniques, and we guide you through all of it. All you have to do is follow our channel and enjoy the content every second week. It is filled with real-life cases and expert experience, and don't worry if this sounds difficult: we will guide you, because we also started from scratch, so we know how it feels. You can always pause and rewind the videos, and you can ask for clarifications in the comments section and we will help you. We hope to assist you on this wonderful journey, and to have you as a speaker on our show one day as well. We believe that by creating these kinds of cross-collaborations with other organizations we can give you the best opportunities to broaden your network in both the AI and the data science communities, and with the combination of our services we can also support less fortunate people and organizations that are not yet recognized even though they deserve it.

Our shows are sponsored by Microsoft and Mayas, and we want to say a big thank you for that. We are also humbled by all the help we get from our contributors: Levent makes the amazing graphic designs for the slides we show during the stream and on StreamYard, and Murray and I create the intro music for each stream. We are in close collaboration with C# Corner and the Global AI Community, so you can see our shows on their YouTube channels as well, ahead of what we publish on our own media.
Nicola creates and reviews the text content that we use during the sessions and for sharing on our web pages and social media. You can follow us on Facebook, Instagram, and Twitter for information about our upcoming sessions and other fun things. If you want to rewatch our videos, go to our YouTube channel, and if you want to stay updated about upcoming sessions, check our Meetup page.

We also have a code of conduct. It outlines the expectations we have for participating in our community, as well as the steps for reporting unacceptable behavior. We are strongly committed to providing a welcoming and inspiring community for everyone, so be friendly, be patient, be welcoming, and be respectful of each other. You can find our code of conduct at the link shown here. And with that, we can go back to our speaker.

Hello again. I think it is time to formally introduce Leila and what she will talk about in her session. Leila has been a Microsoft AI and Data Platform MVP since 2016, and she was the first AI MVP in New Zealand and Australia. She has a PhD in information systems from the University of Auckland. She is the co-director and a data scientist at RADACAD, a company with many clients all over the world, and one of the bloggers at RADACAD, with more than 800 articles and more than 9 million readers around the globe annually. She is also a co-organizer of the Microsoft Business Intelligence and Power BI user group, SQL Saturday Auckland, the Difinity conference, and the Auckland AI community, and an international speaker at most Microsoft conferences who has facilitated over 200 sessions and full-day workshops. She also loves solving jigsaw puzzles, playing board games, and her dog's company. We are very thrilled to have you here with us, Leila.

Thanks so much, Håkan and Eve, and also Gosia. I know you are a very good team; it was really great to have a session with you about two weeks ago, I believe. It's my pleasure, thank you.

And we are especially humbled because it is quite early where you are, right?

Yes, it is 5 a.m. here in New Zealand. I think in the rest of the world it is the 15th of December; here it is the 16th.
But it's fun. It is one thing to speak about AI and another to be with the AI community, so it is fun for me. Thank you.

It is really fun for us as well. Could you give us a little recap of what we talked about last time?

Yes, sure. Last time we went through an overview of Azure Machine Learning: what machine learning is, what the workspace is, and we had a very quick look at Automated ML, the Azure ML designer, and notebooks, and how they work. Today we go a bit deeper: first into the designer, to create a pipeline, deploy it, and consume it from code (Python code, or it could be C# code), and after that we look at how to create a pipeline using Python inside a Jupyter notebook. So we will look in more depth at how to create a web service and a pipeline, so that everything becomes an orchestration, an end-to-end process in one structure.

I'm really excited, because this is one of my favourite tools on Azure, so I can't wait to see something about it.

Thank you so much. We'll leave the stage to you, Leila.

Cool. Hello everyone, and welcome to my session about Azure ML orchestration and how we can create a web service through it. As mentioned, I am in Auckland, New Zealand, and my introduction has already been given, but if you have any questions after this session, this is my Twitter handle and my email address, so you can get in touch with me. If you ask me a question and I know the answer, I will answer; otherwise I will go and search, and it is good for me to hear new questions because they give me a new idea of what is happening. Thanks so much, everyone, for joining. This is not my last session of 2021: I have another one, about market basket analysis, in about six hours, and that will be my last session of the year.

If you remember, we talked about the Azure ML designer. Today we are going to look at how to create a pipeline in the designer, how to train the model, and then how to create a web service out of it and use it. I assume you have some knowledge of how to work with a workspace, but I will still go through it. If you remember, we started by creating a workspace, which gives us the ML experience, and we connected to the workspace. Last time I talked about notebooks and Automated ML; today I will cover one of the scenarios we have, which is creating a model in the designer. For some of you this may be familiar, so I will go quickly through the model I have already created and show it to you, but before that, let me build a simple one and talk about it, and let me also share some resources. There is an Azure ML fundamentals learning path that provides really nice examples of using Automated ML, and there are workshops on using the Azure ML designer for regression, classification, and clustering. I found them to be really step by step
and really helpful. I will share the link with Eve and Håkan so they can share it with you. They are great for people who are just starting out, or who are already somewhat familiar with these tools. Let me share it in the private chat and they will pass it on to you.

The first step is to bring in a dataset. You can bring your own dataset or use one of the samples; here I use the Automobile price data. As you can see, you can preview the data to see what it is about; it has 26 columns and about 205 rows. You can then drag and drop other components. For example, I add a Select Columns in Dataset module: I don't want all of the columns, so I connect the output of the dataset to the input of this module and select the parts of the data we need. I edit the columns (you can select by rule, by column names, or search by name), and I think we need everything except the column about normalized losses, so we exclude that one and end up with about 25 columns. Notice that I don't write any code; all of these steps happen here in the designer. Next I can add a Clean Missing Data module; this is very similar to the old Azure ML Studio. I believe we have missing values in bore and stroke, and here we pick the bore and the horsepower columns, the ones we need. I may need to run the pipeline first, because sometimes it cannot show the column list when the previous module has not run yet. There are different ways of handling missing data: you can substitute a value or remove the entire row, and you can do all of this data transformation here.

Because building this takes a bit of time, let me show you the pipeline I have already created. This is my data; I select some of the columns, as you can see if I click on Edit, and then I clean the missing data for the columns that have missing values. Then I bring all of the data onto one scale. This matters because most algorithms use distances, Euclidean distance and others, and if we don't bring the data onto the same scale it can hurt the quality of the algorithm. So if you have numerical data and you are using regression algorithms, it is recommended to bring your data onto the same scale: one column might be in the range 0 to 100, another between 100 and 1,000, another between 0 and 1, and they should all end up on one scale. One approach is min-max normalization, which brings the data between zero and one, but there are also Z-score, logistic, and lognormal transformations. Which one to use depends somewhat on the algorithm you are going to apply; there are some rules of thumb in statistics
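To make the scaling idea concrete: min-max normalization rescales each numeric column into the 0 to 1 range, x' = (x - min) / (max - min). A tiny pandas sketch with made-up values, not the actual automobile rows from the demo:

```python
import pandas as pd

# Toy data frame standing in for a couple of numeric automobile columns.
df = pd.DataFrame({
    "horsepower": [48, 111, 154, 262],
    "price":      [5118, 13950, 16500, 36880],
})

# Min-max normalization: x' = (x - min) / (max - min), applied per column.
normalized = (df - df.min()) / (df.max() - df.min())
print(normalized)  # every column now lies between 0 and 1
```

This is what the Normalize Data module does for you in the designer; the point of the sketch is only to show why columns on very different ranges end up comparable afterwards.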
about which transformation suits which algorithm. For this model, though, I just used the simple min-max option. These are the columns that need to be transformed; we cannot transform columns that are categorical or text, so it has to be numerical values. Those are the data transformation steps, quite simple ones, and I always recommend them. And yes, if you want, you can apply a SQL transformation here; you can bring your own code, and the environment gives us Python, R, and SQL modules (we will actually use one of them later). So for Python, yes, there is a module you can use for data transformation if you cannot find a built-in one, and the same for R and SQL. But the best practice is to clean your data before bringing it here: if the data is gathered from different sources, connect and clean it first, and then do only the machine-learning-specific preparation in this environment, not the general data cleaning, which should happen before loading the data here.

After the cleaning comes the split: as you can see, 70 percent is used for training. Just as a reminder, as in the old Azure ML Studio, the 70 percent for training goes out of the left output and the rest goes out of the right output for testing. It is a very simple setup. Then come the models: I have two algorithms here, Linear Regression and Fast Forest Quantile Regression. We train them with the training data, score them on the test data, and evaluate them.

Once you have created this, it is time to click Submit, which runs your experiment. When you click Submit it asks you about the compute you need, a compute instance. Before going further, let me show something from the previous session: we have a couple of different compute types, compute instances and compute clusters. A compute instance is mainly used for training the data; attached compute is mainly used when you connect Azure ML to Databricks; a compute cluster is used from notebooks when you want to run training and predictions on demand; and an inference cluster is used for deployment, for example to IoT Edge and similar targets. For this example, when I click Submit it runs on the default compute instance I have here; it is a virtual machine, and you can see its specification. You can select a new one or the current one and then click Submit, and it runs all of the steps. This one has already been submitted for me, so let me just submit it again; it takes a bit of time. So we have now created the training pipeline, and next we are going to create a web service out of it so we can use it elsewhere. When you submit, you get two inference pipeline options. Let me see if I can zoom in so you can see it
properly. Here we can create a real-time inference pipeline, the one I am going to use, or a batch inference pipeline. Batch is for when you send, for example, a whole file for prediction; real time is a single request and response, one input sent and one result received. We use the real-time one: batch is more the organizational, large-scale option and may need bigger compute, while the real-time one lets us create something simple for this demo. So I click on creating the real-time inference pipeline; it shows a message and starts working. It had two branches, so let me quickly remove the other one and stick with one; again I submit, and off we go. You need to choose which trained model you want to use; as you can see, I have a couple of them, because I created them to simulate Automated ML and see which algorithm has the better evaluation. Now it is creating the real-time inference pipeline for me, so let's wait a moment.

As you can see, the real-time inference pipeline is a bit different from what we had before. How can we see it? Let's go back to the Experiments tab to see what is happening: we have the regression experiment, and under it, if I click on it, the pipeline that was created. But let me go back to the designer; I think it is better to show it there. This is my training pipeline. I can still go back to it, change it, do whatever I want, but the tool has also created another one for me, the real-time inference pipeline, which is a bit different. Let's see what the difference is. We have a web service input at the top, the place where the input arrives; then the column selection and the transformations are applied; and then the trained model. As you can see, you cannot change the algorithm anymore; it has become a package, a black box. Those are the steps, and at the end there is a web service output. We don't need some of the components anymore, because this is no longer the training part: everything is encapsulated into packages with an input and output schema for calling the web service. We definitely don't need the Evaluate Model module, so I recommend deleting it. We also don't need the original dataset; we just need a sample created with an Enter Data Manually module. So remove the dataset and, instead, create the manual data: I paste in a few sample rows, which are only there to define the schema, and connect it up. We also need to edit the column selection, because some parts of the data change here. This is no longer training: the previous pipeline included the price, but here we want to predict the price.
Because this pipeline only predicts the price, there is no need to include the price column, so in the column selection I exclude price, if it is showing here for me, and I think that should be it. That is another data transformation: again, this is not training, it is just prediction; we want to predict the price, we don't know the price, so we exclude it.

The other change comes after the Score Model module. That is where we test the model, but to show the output to users (it cannot show the output right now because it has not run yet) we need to decide which variables come in and which go out. I only want to show the predicted price, the scored label, not the other columns. To do that, there is a bit of Python you can use: I add an Execute Python Script module, redirect the output of the Score Model module into it (remove the old connection and link it up), and do a small transformation that shapes what we return; a sketch of what this module's script can look like appears a little further down. This is the code I use, and let me also share the link I used for it so you can keep it. Thanks for sharing it; I think everyone has the link now. What the code does is take our data and, as you can see, create a data frame that has only the scored label, so the output contains just the scored label as the predicted price. I use Python only for this small part; we never use it for the whole process here, though if you have a specific algorithm you can. It is essentially a select that shows only the scored label as the predicted price.

Back to what we have: I think we are good to go, so I submit again, select the existing regression experiment and the compute instance, and let it run; it shouldn't take too long. This part is a bit confusing for many people, because it has the same structure as training, but it is not training. When you are in the real-time inference pipeline you are defining what your web service input should look like, as you can see here, and what the output should look like, plus the encapsulated packages in between; together we can call them a pipeline. All of the steps (what the manually entered data looks like, what the web service looks like) are written here. We have already trained our model, and we are going to use that model for prediction. It may take a couple of minutes, and then we are ready to continue; as you can see, the steps run in parallel where possible. If you want to see the actual steps laid out, there is a learning module, part of the AI-900 exam material, about creating a regression model with Azure Machine Learning.
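For reference, the Execute Python Script module expects an entry function called azureml_main that receives up to two pandas data frames and returns a data frame (or a tuple of them). A minimal sketch of the kind of script described here, keeping only the scored label; the output column name predicted_price is my own choice, and "Scored Labels" is the column the Score Model module normally produces:

```python
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # dataframe1 is the output of the Score Model module.
    # Keep only the scored label and rename it to something friendlier.
    scored = dataframe1[["Scored Labels"]].rename(
        columns={"Scored Labels": "predicted_price"}
    )
    # The module expects a sequence of data frames as the return value.
    return scored,
```

The whole module is effectively a select statement in Python: everything except the renamed prediction column is dropped before the result reaches the web service output.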
All of the information is in there: which compute you need, how to explore the data, everything. The links I shared are really good; I really recommend following them, because among most of the training material Microsoft has, I think the AI-900 material is one of the best: really well designed, really step by step, and you can easily understand how it works. Let me wait a minute and also check the questions before going ahead. Thanks so much for sharing the session links here. Cool.

After it is finished we are going to test it and see how we can use it in other applications. Oops, I think there is a problem; I missed one of the steps. Let me see what it was; I just need to stop it. Let me double-check. I think it is the price column: checking the data, there is no price column in the manually entered sample, so trying to exclude it again makes it confused. That is my fault; I'll just remove that and submit again. That is the process we are going through, so it submits again; meanwhile let me double-check whether I can use one of the pipelines I already created before, which I think is a bit better. Yes, it is still running; let's see whether it goes well. I think it is running.

After those steps are done, we are going to deploy it. To show that, I'll leave this here and go to the Notebooks section, where I have copied some code to test it. You can add a new file: create a new file, call it something like test-regression, and make it a notebook item, a Python notebook. We will use it to test the code and see how the endpoint behaves, but before that we need the other steps to finish, so let's wait until it is done.

Once this process finishes, we are able to deploy it. When we deploy, we get an endpoint; let me see if I have one from a previous run. Yes: after you click Deploy, under Endpoints you can see the endpoint. If you click on it (this one is from another example) you get a Consume tab, and under Consume you get the REST endpoint that you can copy and use, and you can also test it right here. This one is for a Titanic example, and you can see how to test it: if I paste some input data here, I can simply run the test and it returns the result. The same process will happen for our pipeline. Going back to the designer, it is still running, we are almost there, and after that we can consume it from our own code.
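Consuming the endpoint from a notebook follows the usual request-and-response pattern for an Azure ML real-time endpoint: POST a JSON payload to the scoring URI, passing the key as a bearer token. A hedged sketch; the URI, key, payload shape, and column names below are placeholders rather than the real values from the demo (the exact payload format is shown on the endpoint's Consume tab):

```python
import json
import requests

# Placeholders: copy the real values from the endpoint's Consume tab.
scoring_uri = "http://<your-endpoint>.azurecontainer.io/score"
key = "<your-primary-key>"

# One sample car record; the fields must match the web service input schema.
payload = {
    "Inputs": {
        "WebServiceInput0": [
            {"make": "toyota", "body-style": "sedan",
             "horsepower": 92, "engine-size": 98}
        ]
    }
}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {key}",
}

response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
print(response.json())  # should contain the predicted price / scored label
```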
The test code is very simple: we have an endpoint, we have our keys, and this is the sample data for a car whose price we want to predict, and we can see the result. The last part is about to finish; I think it should be done in a couple of minutes, and then we can click Deploy. The whole process runs and checks everything, which is why we call it a pipeline: all of the data processing, creation, and testing happens here. If you look at it, we connect to the workspace, we have the training script, all of these parts are encapsulated into one unit, and then the script is run. To be clear, this is not a full end-to-end pipeline: you have already created the machine learning model, and it just pipelines your data cleaning and your machine learning process and exposes them as a web service. It is not a pipeline that parameterizes the algorithm and so on, but it still encapsulates the steps we already have, the algorithm and the data transformations such as normalization.

When it is done, we can deploy it. Click Deploy and give it a name, something like regression-price-auto; it has to be lowercase (my bad), the same as most Azure resource names. Then you choose the compute type. If you are using it for batch processing or at organizational scale, you need AKS, Azure Kubernetes Service (I call it AKS because that is simpler for me); if you just want to deploy it for local, small-scale use, ACI, Azure Container Instances, is enough. So for a production-scale deployment you would use AKS, but for this scenario I use ACI. Here you can also enable Application Insights if you want a report on how many times users call your endpoint, you can enable SSL (I'm not sure about the details of that one, I need to double-check), and you can set the reserved CPU capacity and memory. I think that is everything we need, so I click Deploy and it gets ready to be deployed. It takes a bit of time: the deployment starts and it creates a real-time endpoint for you, which you will see under Endpoints. It has started creating it but it is not complete yet, so it takes a while, and after it is created you can use it here or in any other application; because we are a bit short on time I am just showing you that. To recap the process so far: we prepare the data, train the model, package the model, validate it, deploy it, and monitor it.
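The deployment above is done through the studio UI (name, ACI versus AKS, Application Insights, CPU and memory). The same kind of ACI deployment can also be scripted; a rough sketch with the v1 azureml SDK, assuming a registered model, a scoring script score.py, and a conda file environment.yml (all hypothetical names, not the artifacts from the demo):

```python
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Assumed: a model registered earlier under this name.
model = Model(ws, name="automobile-price-regression")

# Entry script + environment describe how to load the model and score requests.
env = Environment.from_conda_specification("score-env", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# ACI: small-scale test deployment; AKS would be the production-scale choice.
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=1, memory_gb=1, enable_app_insights=True
)

service = Model.deploy(
    ws,
    name="regression-price-auto",   # endpoint names must be lowercase
    models=[model],
    inference_config=inference_config,
    deployment_config=aci_config,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```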
That is the process you have created, and when you want to encapsulate everything so that it all runs together, we call it a pipeline. Pipelines are a key to implementing effective machine learning operationalization, MLOps, solutions in Azure. Steps can be arranged sequentially or in parallel: for example, if you run a couple of algorithms at the same time, the data processing can be one step while the algorithms run as separate, parallel steps. Each step can run on a specific compute target, which is a strong point because it helps with scalability: you can have several different virtual machines and run each algorithm on its own machine.

"Pipeline" can mean different things. In scikit-learn, a pipeline is a combination of data processing with a training algorithm; in Azure DevOps it means something a bit different. When we say pipeline in Azure ML, it is not just one process but a combination of processes. Common steps in an Azure Machine Learning pipeline include running Python scripts at specific points, using Azure Data Factory beforehand to get the data from the data source, running a notebook or a script on Databricks, or running a U-SQL job or other jobs. So it is not limited to what you see in Azure ML; it can include other Azure jobs that prepare or deploy your data.

Here is an example of how it is put together. In the first step we prepare the data: we have a Python script that does the data preparation, and we specify which compute target to use. Then we have a second step, a Python script that trains the model. These are the steps we define, and after that we combine them: step one is the data preparation, step two trains the model on a specific compute target, and then we create a pipeline from step one and step two and run that training pipeline. Those are the main pieces. There is also a link I am going to share with you that is really interesting: the guideline and learning path for the DP-100 exam. Let me find the pipeline notebook I am going to use; it is already published on GitHub, and I will share the link so you can use it as well. Then we can go through the code.
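Before the walkthrough, here is a minimal sketch of the two-step pattern just described, using the v1 azureml SDK. The folder, script names, compute name, and experiment name are illustrative assumptions, not the speaker's actual notebook:

```python
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
compute_name = "cpu-cluster"              # assumed existing compute target
datastore = ws.get_default_datastore()

# Intermediate storage for handing the prepared data from step 1 to step 2.
prepped_data = PipelineData("prepped_data", datastore=datastore)

# Step 1: prepare the data (normalize, filter, save) on a specific compute target.
prep_step = PythonScriptStep(
    name="prepare-data",
    source_directory="pipeline_scripts",  # assumed folder containing the scripts
    script_name="prep_data.py",
    arguments=["--output-folder", prepped_data],
    outputs=[prepped_data],
    compute_target=compute_name,
)

# Step 2: train and register the model, consuming the prepared data.
train_step = PythonScriptStep(
    name="train-and-register-model",
    source_directory="pipeline_scripts",
    script_name="train_model.py",
    arguments=["--input-folder", prepped_data],
    inputs=[prepped_data],
    compute_target=compute_name,
)

# Combine the steps into one pipeline and run it as an experiment.
pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "diabetes-training-pipeline").submit(pipeline)
run.wait_for_completion(show_output=True)
```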
First, I can see the question Gabriel has: as a data scientist, how much knowledge of Kubernetes or web development do I need? It depends, because we have a couple of roles. There are data scientists, and if they are pure data scientists they just create the algorithms; and then there is the role we call AI engineer. AI engineers know the algorithms, but they also need to know which servers to use and how to use AKS or ACI. So yes, if you work as an AI engineer, meaning your role in your organization is to deploy everything through Azure, you need to know it, and the links I am sharing about the DP-100 Microsoft data science exam and the other exam, AI-100, give a really good understanding of that. Another question asks which tools I use most in my daily consulting job: notebooks, Automated ML, or the designer. To be honest, if I rank them, notebooks first and then Automated ML. The designer I mainly use when I want to give a fast demo, for prototyping, to show a customer what the process will be, because for a customer it is really easy to understand. That is for me; for other people it can be different.

Now, here is the process we go through to create a pipeline in code. The library we use is azureml.core. I won't run it live because we have limited time; I already ran it earlier. We connect to our Azure ML workspace, and then we prepare the data. This example uses data about diabetes: if I show you under Data, we have a diabetes dataset, and the code gets that data from our datastore. It prepares the data (nothing complicated) and then creates the pipeline steps: it defines a pipeline named diabetes-pipeline with different steps. The first pipeline step is the data preparation for the input data; we call it prep data, and as you can see it does some transformations: normalizes the data, filters and log-transforms rows, saves the prepared data, and adds this step to the pipeline. The other step is the training step: it splits the prepared dataset into test and train, trains the model, and works out the accuracy and whatever else it should, so that is the training one. It also defines which compute to use, and after that it is added to the pipeline. So this is our pipeline, with the compute allocated to it, and you can see the settings: a step named prepare-data that takes the input data and prepares it, and a step two that trains the model; if you remember, there was an argument for the training data, and that is the second step.
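For a sense of what lives inside such a step, here is a hedged sketch of a preparation script along the lines described: read the data, normalize it, and save it to the output folder the pipeline passes in. The dataset name, column list, and file names are assumptions for illustration only:

```python
# prep_data.py -- runs as the first pipeline step.
import argparse
import os
import pandas as pd
from azureml.core import Run

parser = argparse.ArgumentParser()
parser.add_argument("--output-folder", dest="output_folder", required=True)
args = parser.parse_args()

run = Run.get_context()

# Assumed: a tabular dataset named "diabetes" registered in the workspace.
diabetes = run.experiment.workspace.datasets["diabetes"].to_pandas_dataframe()

# Min-max normalize the numeric feature columns (illustrative column list).
numeric_cols = ["Pregnancies", "PlasmaGlucose", "BMI", "Age"]
mins, maxs = diabetes[numeric_cols].min(), diabetes[numeric_cols].max()
diabetes[numeric_cols] = (diabetes[numeric_cols] - mins) / (maxs - mins)
run.log("rows_processed", len(diabetes))

# Save the prepared data where the next step can pick it up.
os.makedirs(args.output_folder, exist_ok=True)
diabetes.to_csv(os.path.join(args.output_folder, "prepped.csv"), index=False)

run.complete()
```

The training script in the second step follows the same shape: parse the input folder argument, load prepped.csv, split into train and test, fit and evaluate the model, and register it with the workspace.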
So step one and step two have been created here, in the next cell. Those are the main ones, but it is not limited to training and testing; a pipeline can include other parts as well. After that we run it, and it actually shows you the pipeline as a graphical interface. You can see that this is my pipeline, and, as we saw, it has two steps: one prepares the data, and the other trains and registers the model, just like in the code. So there is a graphical view of the pipeline that you can check here, and if I had other steps, such as creating scripts or getting the data from Azure, you would see all of them here as well. Those are the two main steps you can follow through this link while it is running.

Jumping to the end: similar to before, it creates the model for you, you create an endpoint, and then you can schedule the pipeline. If you want all of these processes to happen regularly, you can say weekly, daily, and so on, schedule it, and you are done. This is a very simple example, but it can be more complicated and combined with other Azure components. So, the process we have here: we connected to the workspace and created the training scripts, which was the first step; the second part registers the model; we defined one step for preparing the data and one for training; and the next steps could be for deployment, like creating the web service, or something purely orchestration-related such as connecting to Azure Data Factory and other services. Of course, each time you run the pipeline it can create a new version of the model. The process starts by creating a folder, then you create the pipeline steps for training and the rest; the pipeline can build a Docker image and run it on your compute target, or it can be mounted to your compute target (both are possible), and then it is deployed as an output. The steps I showed you were exactly these, but after that you can also create the image and submit it to your own Docker environment. This is a simple version of the process; if you are looking for a complex one, I can send a link later. But this shows how to create a pipeline so that, instead of doing each step manually, you can run it all as one piece. I have just sent some of the links in the chat. Any questions, anything on this?
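The publish-and-schedule part mentioned above can be scripted too. A rough sketch, assuming the ws and pipeline objects from the earlier sketch and using the azureml SDK's Schedule API; the names and the weekly recurrence are illustrative:

```python
from azureml.pipeline.core import ScheduleRecurrence, Schedule

# Publish the pipeline so it gets a REST endpoint and can be triggered or scheduled.
published = pipeline.publish(
    name="diabetes-training-pipeline",
    description="Prepare the data, then train and register the model",
)

# Run the published pipeline every Monday at 06:00 (illustrative recurrence).
recurrence = ScheduleRecurrence(frequency="Week", interval=1,
                                week_days=["Monday"], time_of_day="06:00")
schedule = Schedule.create(
    ws,
    name="weekly-diabetes-training",
    pipeline_id=published.id,
    experiment_name="diabetes-training-pipeline",
    recurrence=recurrence,
)
print(schedule.id)
```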
Hi Leila, it was really cool again today with you, thank you. And yes, we have some questions from the audience. The first one is from Gabriel: as a data scientist, how much knowledge of Kubernetes or web development do I need to deploy machine learning inference models in Azure?

I think I replied to that one already: yes, as I mentioned, as an AI engineer you need to know it.

Yes, and we actually have a new question, also from Gabriel. He asks: do you have any recommendation on how to select the compute instance for running experiments, to balance performance and price per hour?

That is a good question. There is the compute instance over here; let me stop it for now, and I will create a new one to show you. When you create a compute instance and go through the configuration settings, it is like a virtual machine. The one I chose here is a Standard DS3 v2, one of the smaller ones, with 4 cores, 14 GB of RAM, and 28 GB of storage, but it really depends on your scenario. If you are using lots of neural networks, or it is a big workload, I recommend creating a bigger one. As you can see, I chose CPU rather than GPU; if you are going to push a lot of data through and do prediction at a large organizational scale, or you have image or voice processing, it is definitely better to use a GPU machine. Otherwise, on the CPU side, you again select based on what you need; for this scenario I just use a two-core one, because it depends on what you actually need.

There is another question: do you have any tips to prepare for the DP-100 data scientist associate certificate? For context, the AI-900 exam is the fundamentals one, which means it is easier. Another one we have is the AI-100 exam; let me show you, its title is Designing and Implementing an Azure AI Solution, so it mainly looks at that side.

Leila, sorry, just a quick comment: I just finished these exams, and the AI-100 is mostly Cognitive Services, while the DP-100 is actually about data science and pipelines.

Yes, exactly, I just wanted to show them. I passed it a couple of months ago, so as we mentioned, AI-100 covers the Azure ML and technical parts, but as Eve said, DP-100 is mainly focused on the algorithm side. To prepare for that exam, as you can see in the learning path, it is really important to understand some machine learning concepts like hyperparameter tuning and the other conceptual things we use. You should also have a good understanding of working with Python code, creating pipelines, and the algorithm side in Python. I totally recommend following this learning path; I can send the link in the chat so you have it. And of course, for each exam, besides going through everything, it is good to check some sample questions (even Microsoft has sample questions), because the exam is different from what we do in the daily job; it looks at things from a different angle, so it is good to have some examples. Anything else from you, Eve? Do you have any comments on preparing?

Not really, only that
this DP-100 maybe only includes just the basics of the Azure ML workspace.

The AI-900 one, you mean?

No, I am actually talking about the data scientist exam, which also includes Azure ML.

Yes, okay. So we have three exams: AI-900, which is the fundamentals; DP-100, which is the one about hyperparameter tuning and things like that; and AI-100.

We also have a question here from Wow, who asks: can you suggest a best practice for creating a data refresh automation process?

For that, I can say Azure Data Factory is a good option to use, so definitely check that one, and Azure Synapse is also connected to Azure ML these days, so that gives really good orchestration between them.

Do we have any questions we haven't answered? No, I think we have actually answered all the questions here. I find the learning paths really good; how about you, what has your experience with them been?

Yes, I felt very comfortable when I went to the exam. It is always different when you actually sit an exam: you start by studying these learning paths, and then maybe do some practice exams, but when you actually go to the exam you can't fully prepare for it, because the use cases are often very specific and you have to make your own decisions in the end. So it is good to start with these lectures, but it is also a good idea to literally go and try out these tools and understand what happens if you change something in your code or in the designer.

Yes, definitely. Experience, as you mentioned, is really important, so don't stick only with the learning path; it covers some parts, but it is good to get some practice as well.

OK, cool. I think there are no other questions; can you see any? No, I can't see any more questions so far; I think you answered them all. Last time I couldn't see the comments, but this time I checked them. But yes, thank you, Leila, it was really cool to have you today, and you presented amazing things again, so I am very happy and I hope our audience also enjoyed the session. I'll stop sharing now.

And with that, we want to say a big, big thank you to all of our contributors, all of the speakers we had this year, and all the love we got from the audience, the speakers, and the contributors. This year was really a blast. How many sessions did we have again, Håkan?

This is our 21st session.

21, yes, so it's amazing, and we can't wait to come back next year and continue this journey. We have amazing sessions coming up next year as well, so I hope you will come and join us to learn more about the tools and services that are available for building great data and AI solutions.

And with that said, we just want to wish everyone a merry Christmas and a happy new year, and we hope to see you again soon in January. Thank you!

Thanks so much for having me, and happy new year to everyone.

Thank you, bye bye, happy holidays!
Info
Channel: CSharp TV
Views: 226
Id: D-rvnQGdH8c
Length: 67min 47sec (4067 seconds)
Published: Wed Dec 15 2021