Azure AI Fundamentals Certification 2024 (AI-900) - Full Course to PASS the Exam

Captions
Hey, this is Andrew Brown, and I'm bringing you another certification course. This time it's the Azure AI Fundamentals, also known as the AI-900. If you're looking to pass a certification, we have everything that you need here, such as labs, lectures, and a free practice exam, so you can go ace that exam, get that certification, put it on your resume and LinkedIn, and go get that job you've been looking for. If you want to support more free courses like this one, the best way is to purchase the additional paid materials, where you can get access to more practice exams and other resources. If you don't know me, I've taught a bit of everything here in the cloud: AWS, Azure, GCP, DevOps, Terraform, Kubernetes, you name it, I've taught it. But you know the drill here, so let's get into it and learn more about the Azure AI Fundamentals.

Hey, this is Andrew Brown from ExamPro, and we are at the start of our journey here learning about the AI-900, asking the most important question: what is the AI-900? The Azure AI Fundamentals certification is for those seeking an ML role such as AI engineer or data scientist. The certification will demonstrate whether a person can define and understand Azure AI services such as Cognitive Services and Azure Applied AI Services, AI concepts, knowledge mining, responsible AI, the basics of ML pipelines, classical ML models, AutoML, generative AI workloads (which is newly added content), and Azure AI Studio. So you don't need super complicated ML knowledge here, but it definitely helps to get you through. This certification is generally referred to by its course code, the AI-900, and it's the natural path to the Azure AI Engineer or Azure Data Scientist certification. This is generally an easy exam to pass, and it's great for those new to cloud or ML-related technology.

Looking at our roadmap, you might be asking: okay, what are the paths and what should I learn first? Here are a few suggested routes. If you already have your AZ-900, that's a great starting point before you take your AI-900. If you don't have your AZ-900, you can jump right into the AI-900, but I strongly recommend you go get that AZ-900 because it gives you general foundational knowledge; it's just one more thing you shouldn't have to worry about, which is how to use Azure at a fundamental level. Do you need the DP-900 to take the AI-900? No, but a lot of people seem to like this route, where they want to have that data foundation before they move on to the AI-900 because they know that broad knowledge is going to be useful. So it's a pairing you see a lot: people getting the AI-900 and the DP-900 together. After the AI-900, the path is a little bit clearer; it's either going to be Data Scientist or AI Engineer. For the AI Engineer you have to know how to use the AI services inside and out; for Data Scientist it's more focused on setting up actual pipelines and things like that within Azure Machine Learning. So you just have to decide which path is for you. The Data Scientist is definitely harder than the AI Engineer, so if you aren't ready for the data science, some people like taking the AI Engineer first and then doing the Data Scientist, kind of like a warm-up. Again, it's not 100% necessary; it's just based on your personal learning style. And a lot of times people like to take the Data Engineer after the Data Scientist just to round out their complete knowledge.
Now, if you already have the AZ-900 and the Administrator Associate, you can go straight to the Data Scientist if you want to risk it, because that one is really hard. If you've passed the AZ-104, you're probably going to have a lot more confidence learning all the concepts at this level. But of course it's always recommended to go grab those foundational certs, because sometimes course materials just do not cover the foundational information, and so the obvious stuff is going to get left out.

Moving forward: how long should you study to pass the AI-900? If you're entirely new to ML, AI, and cloud providers such as Azure, you should anticipate dedicating around 15 hours to grasp the basics. This estimate can vary based on your familiarity with these concepts; for complete beginners the time commitment might extend to 20 to 30 hours. For the intermediate level, people who have passed the AZ-900 or DP-900, you're looking at around 8 to 10 hours. If you have one or more years of experience with Azure or another cloud service provider like AWS or GCP, you're looking at about 5 hours or less. The average study time is about 8 hours, and you should be committing 50% of the time to the lectures and labs and 50% to the practice exams. The recommended pace is 30 minutes to an hour a day for 14 days. That should get you through it; just don't overstudy and don't spend too little time either.

What does it take to pass the exam? Well, you've got to watch the lectures and memorize key information, and do hands-on labs, following along with your own Azure account. I'd say you could probably get away with just watching all the videos for this one without having to do the labs, but it really does reinforce the information if you take the time. There is some stuff in Azure AI Studio or Azure Machine Learning where you might be wary of launching instances, because we do have to run instances and they will cost money, though if you delete the instances after use the costs are very small. So if you don't feel comfortable with that, just watching the videos should be okay; but when you get into the associate tier you absolutely have to expect to pay something to learn and take that risk. You also want to do paid online practice exams that simulate the real exam. As I've mentioned before, I provide a free practice exam and have paid practice exams that accompany this course on my platform, ExamPro, and that's how you can help support more of these free courses. Can you pass this certification without taking a practice exam? Azure is a little bit harder; if this were an AWS exam I would say yes, but for Azure exams like the AI-900, DP-900, and SC-900, probably not. It's kind of risky. I think you should do at least one practice exam, or go through the sample one; there's probably a sample one laying around on the Azure website.

Let's take a look at the exam guide breakdown here, and then in the following video we'll look at it in more detail. The exam has five domains of questions, and each domain has its own weighting, which determines how many questions from that domain will show up: 15 to 20% will be Describe AI workloads and considerations, 20 to 25% will be Describe fundamental principles of machine learning on Azure, 15 to 20% will be Describe features of computer vision workloads on Azure, 15 to 20% will be Describe features of natural language processing workloads on Azure, and 15 to 20% will be Describe features of generative AI workloads on Azure. I want you to notice it says "describe" for these domains.
This is good because it tells you it's not going to be super hard. If you start seeing things that go beyond "describe" and "identify", then you know it's going to be a bit harder.

So where do you take this exam? Well, you can take it in person at a test center or online from the convenience of your own home. There are two popular test providers: Certiport and Pearson VUE. You can also take it at a local test center if there are nearby locations. The term "proctor" means a supervisor or person who is monitoring you while you're taking the exam. If I had the option between in person or online, I would always choose in person because it's a controlled environment and it's way less stressful; online there are many things that can go wrong, but it's up to your personal preference and your situation. The passing grade is 700 out of 1,000, so that's around 70%. I say "around" because you could possibly fail with 70%, since these things work on scaled scoring. For response types, there are about 37 to 47 questions and you can afford to get about 10 to 13 questions wrong. Some questions are worth more than one point, some questions cannot be skipped, and the format of questions can be multiple choice, multiple answer, drag and drop, and hot area. There shouldn't be any case studies for foundational-level exams, and there's no penalty for wrong answers. For the duration you get 1 hour, which means about 1 minute per question: the exam time is 60 minutes and your seat time is 90 minutes. Seat time refers to the amount of time you should allocate for the exam, so it includes time to review the instructions, read and accept the NDA, complete the exam, and provide feedback at the end. This certification is valid forever and does not expire; Microsoft fundamentals certifications such as the AZ-900 or MS-900 do not expire as long as the technology is still available or relevant. So we'll proceed to the full exam guide now.

Hey, this is Andrew Brown from ExamPro, and what we've pulled up here is the official exam outline on the Microsoft website. If you want to find this yourself, you just have to type in "AI 900" along with Azure or Microsoft and you should be able to easily find it; the page looks like this. What I want you to do is scroll on down, because we're looking for the AI-900 study guide, and from there we're going to scroll down to the skills measured section; you might want to bump up the text. Azure loves updating their exams with minor updates that don't generally affect the outcome of your study here, but it does get a lot of people worried, because they always ask, "is your course out of date?" No, they're just making minor changes, and they'll do this like five times a year. If there were a major revision, what would happen is they would change the code, so instead of being the AI-900 it would be something like the AI-901 or AI-902, similar to how the AI-102 was previously the AI-100. So just watch out for those, and if it's a major revision then yes, it would probably need a completely new course. There aren't any major changes with the new update other than the updated generative AI workloads on Azure section, a couple of name changes, and a few things being removed; everything else remains relatively the same with very minor changes, so the concepts and such are still up to date. Overall I think the exam is easier than before. So let's go through some of the topics and work our way through here.
First up is Describe AI workloads and considerations. Here we're just describing the generalities of AI. Content moderation workloads involve filtering out inappropriate or harmful content from user-generated inputs, ensuring a safe and positive user experience. Personalization workloads analyze user behavior and preferences to tailor content, recommendations, or experiences to individual users. Computer vision workloads involve the analysis of images and videos to recognize patterns, objects, faces, and actions. You also need to identify natural language processing, knowledge mining, document intelligence, and features of generative AI workloads. Note that these are all just concepts; you don't need to know how to use the services, just understand them at a high level. Then you have the responsible AI section. Microsoft has six principles that they really want you to know, and they push them throughout all their AI services, so those are the six you'll need to know, and they're not that hard to learn.

Moving on, we have Describe fundamental principles of machine learning on Azure. Here it's describing regression, classification, clustering, and features of deep learning; we get a lot of practical experience with these in the course, so you will understand at the end what they are used for. Next we have core machine learning concepts: identify features and labels in a dataset (that relates to the data labeling service), describe how training and validation datasets are used in machine learning (we'll touch on that), and describe capabilities of automated machine learning. AutoML simplifies building and picking the best models, while data and compute services provide the power you need for training. Azure Machine Learning helps with managing and deploying your models, letting you put your machine learning projects into action smoothly.

Under computer vision workloads we have image classification, object detection, optical character recognition, facial detection, and facial analysis solutions. Next we have Azure AI Vision, Azure AI Face detection, and Azure AI Video Indexer. The Azure AI services encompass a wide range of tools designed to facilitate the development of intelligent applications. These services used to be called Computer Vision, Custom Vision, Face service, and Form Recognizer, but they have evolved or been grouped under broader service categories to streamline their application and integration into projects.

For NLP we have key phrase extraction, entity recognition, sentiment analysis, language modeling, speech recognition and synthesis (this one doesn't really appear much; it's more of a concept than something we have to do), and then there's translation. Then we have Azure tools and services for NLP workloads. These include the Azure AI Language service, Azure AI Speech service, and Azure AI Translator service. These used to be separate services, I believe, like the Text Analytics service, LUIS, the Speech service, and the Translator Text service, but they have been folded under the Azure AI umbrella of services.

Now we'll be moving on to the generative AI workloads on Azure. We'll be covering features of generative AI models, common scenarios for generative AI, responsible AI considerations for generative AI, and also some of the cool features that the Azure OpenAI Service has to offer, such as natural language generation, code generation, and image generation. So that's a general breakdown of the AI-900 exam guide.

Hey, this is Andrew Brown from ExamPro, and we are looking at the layers of machine learning. Here I have this thing that looks kind of like an onion, and it's just describing the relationship between these ML terms related to AI.
We'll work our way through here, starting at the top. Artificial intelligence, also known as AI, is when machines perform jobs that mimic human behavior. It doesn't describe how they do that; it's just the fact that that's what AI is. One layer underneath we have machine learning: machines that get better at a task without explicit programming. Then we have deep learning: machines that have an artificial neural network, inspired by the human brain, to solve complex problems. And if you're talking about someone who actually assembles ML or deep learning models or algorithms, that's a data scientist: a person with multidisciplinary skills in math, statistics, predictive modeling, and machine learning to make future predictions. What you need to understand is that AI is just the outcome, and so AI could be using ML underneath, or deep learning, or a combination of both, or just if/else statements.

All right, let's take a look at the key elements of AI. AI is software that imitates human behaviors and capabilities, and there are key elements, according to Azure or Microsoft, as to what makes up AI, so let's go through this list quickly. We have machine learning, which is the foundation of an AI system, able to learn and predict like a human; anomaly detection, detect outliers or things out of place like a human; computer vision, be able to see like a human; natural language processing, also known as NLP, be able to process human languages and infer context, like a human; and conversational AI, be able to hold a conversation with a human. I wrote "according to Microsoft and Azure" here because the global definition is a bit different, but I wanted to include it because I've definitely seen this as an exam question, and so we're going to have to go with Azure's definition.

Let's define what a dataset is. A dataset is a logical grouping of units of data that are closely related and/or share the same data structure, and there are publicly available datasets that are used in the learning of statistics, data analytics, and machine learning. I just want to cover a couple here. The first is the MNIST database: images of handwritten digits used to test classification, clustering, and image processing algorithms. It's commonly used when learning how to build computer vision ML models that translate handwriting into digital text; it's just a bunch of handwritten digits. Another very popular dataset is the Common Objects in Context (COCO) dataset. This is a dataset which contains many common images, using a JSON file in the COCO format that identifies objects or segments within each image. This dataset has a lot of stuff in it: object segmentation, recognition in context, superpixel stuff segmentation; it has a lot of images and a lot of labeled objects. So why am I talking about this, and in particular COCO? Well, when you use Azure Machine Learning Studio, it has a data labeling service, and that service can actually export into COCO format; that's why I wanted you to get exposure to what COCO is. The other thing is that when you're building out Azure Machine Learning pipelines, they actually have open datasets, which we'll see later in the course, showing that you can just use very common ones, and so you might see MNIST and COCO in there. So I just wanted to give you some exposure.
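To make the COCO format a little less abstract, here is a minimal sketch of what an exported annotation file might look like. The file name, ids, and the single "dog" category are made-up examples for illustration, not actual output from the labeling service.

```python
import json

# A minimal, made-up COCO-style annotation: one image, one category, one labeled box.
coco_export = {
    "images": [
        {"id": 1, "file_name": "photo_001.jpg", "width": 640, "height": 480}
    ],
    "categories": [
        {"id": 1, "name": "dog", "supercategory": "animal"}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,                 # which image this label belongs to
            "category_id": 1,              # which label was applied
            "bbox": [120, 80, 200, 150],   # x, y, width, height in pixels
            "area": 30000,
            "iscrowd": 0,
        }
    ],
}

# Write it out the way a labeling export might look on disk.
with open("labels_coco.json", "w") as f:
    json.dump(coco_export, f, indent=2)
```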
Let's talk about data labeling. This is the process of identifying raw data (images, text files, videos) and adding one or more meaningful, informative labels to provide context, so a machine learning model can learn. With supervised machine learning, labeling is a prerequisite to produce training data, and each piece of data will generally be labeled by a human. The reason I say "generally" is because with Azure's data labeling service you can actually do ML-assisted labeling. With unsupervised machine learning, labels will be produced by the machine and may not be human readable. One other thing I want to touch on is the term "ground truth": a properly labeled dataset that you use as the objective standard to train and assess a given model is often called ground truth, and the accuracy of your trained model will depend on the accuracy of your ground truth. Using Azure's tools I've never seen them use the words "ground truth"; I see that a lot in AWS, and even this graphic here is from AWS, but I just want to make sure you are familiar with the terminology.

Let's compare supervised, unsupervised, and reinforcement learning. Starting at the top we have supervised learning: the data has been labeled for training, and it's considered task-driven because you are trying to make a prediction, to get a value back. So when the labels are known and you want a precise outcome, when you need a specific value returned, you're going to be using classification and regression. Unsupervised learning is data that has not been labeled; the ML model needs to do its own labeling. This is considered data-driven, because it's trying to recognize a structure or a pattern. So this is when the labels are not known and the outcome does not need to be precise, when you're trying to make sense of data; here you have clustering, dimensionality reduction, and association. If you've never heard the term dimensionality reduction before, the idea is that it reduces the number of dimensions to make it easier to work with the data, to make sense of the data. Then we have reinforcement learning: there is no data; there's an environment, and an ML model generates data and makes many attempts to reach a goal. This is considered decisions-driven, and it's used for game AI, learning tasks, and robot navigation; when you've seen someone code a video game that can play itself, that's what this is. If you're wondering, this is not all the types of machine learning, and of these, supervised and unsupervised are specifically considered classical machine learning because they heavily rely on statistics and math to produce the outcome. But there you go.

So what is a neural network? Well, it's often described as mimicking the brain. A neuron or node represents an algorithm; data is input into a neuron, and based on the output, the data will be passed to one of many connected neurons. The connections between neurons are weighted (I really should have highlighted that one, it's very important). The network is organized into layers: there will be an input layer, one to many hidden layers, and an output layer. Here's an example of a very simple neural network. Notice the NN; a lot of times you'll see this in ML as an abbreviation for neural network, and sometimes neural networks are just called neural nets, so just understand that it's the same term.
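To make the layers-and-weights idea concrete, here is a tiny sketch of a forward pass written in plain NumPy. The layer sizes and the ReLU activation are arbitrary choices for illustration, not anything specific to Azure.

```python
import numpy as np

# Data flows through weighted connections, layer by layer:
# input layer -> hidden layer -> output layer.
rng = np.random.default_rng(0)

def relu(x):
    # A common activation function applied at a hidden layer
    return np.maximum(0, x)

# Made-up sizes: 3 input features, 4 hidden nodes, 1 output node
W1 = rng.normal(size=(3, 4))   # weights: input layer -> hidden layer
W2 = rng.normal(size=(4, 1))   # weights: hidden layer -> output layer

def forward(x):
    hidden = relu(x @ W1)      # weighted sum, then activation
    output = hidden @ W2       # weighted sum at the output layer
    return output

print(forward(np.array([0.5, -1.2, 3.0])))
```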
What is deep learning? This is a neural network that has three or more hidden layers. It's considered "deep" learning because at that point it's not human readable to understand what's going on within those layers. What is feedforward? These are neural networks where the connections between nodes do not form a cycle; they always move forward, so that just describes a forward pass through the network. You'll see FNN, which stands for feedforward neural network, used to describe that type of network. Then there's backpropagation, which, in feedforward networks, is where we move backwards through the neural net, adjusting the weights to improve the outcome on the next iteration; this is how a neural net learns. The way backpropagation knows how to do this is that there's a loss function: a function that compares the ground truth to the prediction to determine the error rate, how badly the network performs. So when it gets to the end it performs that calculation, and then it does its backpropagation and adjusts the weights. Then you have activation functions: an algorithm applied to a hidden-layer node that affects the connected output. For an entire hidden layer they'll all use the same one, and it affects how the network learns and how the weighting works, so it's part of backpropagation and the learning process. There's also the concept of dense, when the next layer increases the number of nodes, and sparse, when the next layer decreases the number of nodes. Any time you see something going from a dense layer to a sparse layer, that's usually called dimensionality reduction, because you're reducing the number of dimensions; the number of nodes in your network determines the dimensions you have.

What is a GPU? It's a graphics processing unit that is specially designed to quickly render high-resolution images and videos concurrently. GPUs can perform parallel operations on multiple sets of data, so they are commonly used for non-graphical tasks such as machine learning and scientific computation. A CPU has an average of 4 to 16 processor cores; a GPU can have thousands of processor cores, so a machine with 4 to 8 GPUs could have as many as 40,000 cores. Here's an image I grabbed right off the Nvidia website, and it illustrates very well why this is so good for machine learning and neural networks: neural networks have a bunch of nodes doing very repetitive tasks, and if you can spread those across a lot of cores, that works out really well. So GPUs are suited for repetitive and highly parallel computing tasks such as rendering graphics, cryptocurrency mining, deep learning, and machine learning.

Before we can talk about CUDA, let's talk about what Nvidia is. Nvidia is a company that manufactures graphics processing units for the gaming and professional markets; if you play video games, you've heard of Nvidia. So what is CUDA? It is the Compute Unified Device Architecture, a parallel computing platform and API by Nvidia that allows developers to use CUDA-enabled GPUs for general-purpose computing on GPUs (GPGPU). All major deep learning frameworks are integrated with the Nvidia Deep Learning SDK, which is a collection of Nvidia libraries for deep learning.
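If you want to see whether a CUDA-enabled GPU is actually visible to your framework, a quick check looks something like the following. This assumes PyTorch is installed; other frameworks expose similar calls.

```python
import torch

# Quick check of whether a CUDA-enabled GPU is visible to the framework.
if torch.cuda.is_available():
    print("GPUs visible:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))
    device = torch.device("cuda")
else:
    print("No CUDA GPU found, falling back to CPU")
    device = torch.device("cpu")

# Work (tensors, models) is then placed on that device.
x = torch.randn(1000, 1000, device=device)
print((x @ x).shape)
```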
One of those libraries is cuDNN, the CUDA Deep Neural Network library. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution (convolution is really great for computer vision), pooling, normalization, and activation layers. On the Azure AI-900 certification they're not going to talk about CUDA, but if you understand these two things you'll understand why GPUs really matter.

All right, let's get an easy introduction to the machine learning pipeline. This is definitely not an exhaustive one, and we'll definitely see more complex ones throughout this course, but let's get to it. Starting on the left-hand side, we might start with data labeling. This is very important when you're doing supervised learning, because you need to label your data so the ML model can learn by example during training. This stage and the feature engineering stage are considered pre-processing, because we are preparing our data to be used to train the model. When we move on to feature engineering, the idea is that ML models can only work with numerical data, so you need to translate your data into a format the model can understand and extract out the important data the ML model needs to focus on. Then there's the training step: your model needs to learn how to become smarter, and it will perform multiple iterations, getting smarter with each one. You might also have a hyperparameter tuning step here (the slide says "tunning" but it should say "tuning"). An ML model can have different parameters, so you can use ML to try out many different parameters to optimize the outcome; when you get to deep learning it's impossible to tweak the parameters by hand, so you have to use hyperparameter tuning. Then you have serving, sometimes known as deploying, though when we say deploy we're talking about the entire pipeline, not necessarily just the ML model step. We need to make the ML model accessible, so we serve it by hosting it in a virtual machine or container; with Azure Machine Learning that's either going to be Azure Kubernetes Service or Azure Container Instances. And then you have inference: inference is the act of requesting a prediction. You send your payload, maybe a CSV or whatever, and you get back the results. You have real-time endpoints and batch processing; batch can be near real time as well, but it's generally slower, and the real distinction is whether I am making a single-item prediction or giving you a bunch of data at once. Again, this is a very simplified ML pipeline; I'm sure we'll revisit ML pipelines later in this course.
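To give a feel for what inference against a served real-time endpoint looks like, here is a rough sketch. The scoring URL, key, and payload shape are hypothetical placeholders; a real deployed endpoint hands you its own scoring URI, key, and expected input format.

```python
import requests

# Hypothetical placeholders: a real Azure ML real-time endpoint provides its own
# scoring URI, key, and expected payload schema.
scoring_uri = "https://example-endpoint.example.com/score"
api_key = "<your-endpoint-key>"

payload = {"data": [[34.0, 5.1, 2.2]]}  # one row of numerical features

response = requests.post(
    scoring_uri,
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
)
print(response.json())  # the model's prediction for that single item
```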
Let's compare the terms forecasting and prediction. With forecasting, you make a prediction with relevant data; it's great for analysis of trends, and it's not guessing. Prediction is where you make a prediction without relevant data; you use statistics to predict future outcomes, it's more of a guess, and it uses decision theory. So imagine you have a bunch of data, and the idea is that you're going to infer from that data: maybe it's A, maybe it's B, maybe it's C. For prediction you don't really have much data, so you're going to have to kind of invent it, and the idea is that you'll figure out the outcome from there. These are extremely broad terms, but now you have a high-level view of the two.

So what are performance, or evaluation, metrics? They are used to evaluate different machine learning algorithms: when your machine learning model makes a prediction, these are the metrics you use to determine whether your ML model is working as you intended. For different types of problems, different metrics matter. This is absolutely not an exhaustive list; I just want to give you exposure to these words so that when you see them you can come back here and refer to this. Lots of these you don't necessarily need to remember, but the classification metrics you should know. For classification we have accuracy, precision, recall, F1 score, ROC, and AUC. For regression metrics we have MSE, RMSE, and MAE. For ranking metrics we have MRR, DCG, and NDCG. For statistical metrics we have correlation. For computer vision metrics we have PSNR, SSIM, and IoU. For NLP metrics we have perplexity, BLEU, METEOR, and ROUGE. For deep-learning-related metrics we have Inception Score and Fréchet Inception Distance (I can't pronounce that name, but I'm assuming it's named after a person). There are two categories of evaluation metrics: internal evaluation, metrics used to evaluate the internals of an ML model, so accuracy, F1 score, precision, and recall (I call them the famous four, used with all kinds of models), and external evaluation, metrics used to evaluate the final prediction of an ML model. So don't get too worked up here; I know that's a lot of stuff, but the ones that matter we will see again and again.

Let's take a look at Jupyter notebooks. These are web-based applications for authoring documents that combine live code, narrative text, equations, and visualizations. If you're doing data science or building ML models, you absolutely are going to be working with Jupyter notebooks; they're always integrated into cloud service providers' ML tools. Jupyter notebooks actually came about from IPython; IPython is the precursor, the notebook feature was extracted out and became Jupyter notebooks, and IPython is now a kernel used to run Python. So when you execute Python code here, it's using IPython, which is just an interactive way of running Python. Jupyter notebooks were overhauled and better integrated into an IDE called JupyterLab, which we'll talk about in a moment, and you generally want to open notebooks in JupyterLab; the legacy web-based interface is known as Jupyter classic notebooks, and this is what the old one looks like. You can still open them, but everyone uses JupyterLab now. So, JupyterLab is the next-generation web-based user interface. All the familiar features of the classic Jupyter notebook are there in a flexible, powerful user interface; it has notebooks, a terminal, a text editor, a file browser, and rich outputs. JupyterLab will eventually replace the classic Jupyter notebooks. So there you go.
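As an example of the kind of cell you might run in a notebook, here is a minimal sketch that computes the "famous four" classification metrics, plus a confusion matrix, with scikit-learn. The ground-truth and predicted labels are made up for illustration.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Made-up ground truth vs. model predictions for a binary classifier
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))

# The confusion matrix we'll look at shortly: rows are actual, columns are predicted
print(confusion_matrix(y_true, y_pred))
```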
We keep mentioning regression, but let's talk about it in more detail, now that we kind of understand the concept. Regression is the process of finding a function to equate a labeled dataset (notice it says labeled, which means supervised learning) to a continuous variable, a number. Another way to say it is: predict this variable in the future. "The future" just means that continuous variable; it doesn't have to be time, but that's a good example of regression. So, what will the temperature be next week, will it be 20 Celsius? How would we determine that? Well, we would have vectors, dots plotted on a graph that can have multiple dimensions (the dimensions could be more than just X and Y, you could have many), and then you have a regression line. This is the line that goes through our dataset, and it's going to help us figure out how to predict the value. How would we do that? Well, we need to calculate the distance of a vector from the regression line, which is called the error, and different regression algorithms use the error to predict future variables. Just looking at this graphic: here is our regression line, and here is a dot, a vector, a piece of information, and the distance from the line is what the ML model uses; if we were to plot another point up here, we would compare its distance to all the others, and that's how we'd find similarity. What we'll commonly see for this is mean squared error, root mean squared error, and mean absolute error, so MSE, RMSE, and MAE.

Let's take a closer look at the concept of classification. Classification is the process of finding a function to divide a labeled dataset (so again, this is supervised learning) into classes or categories: predict a category to apply to the input data. So, will it rain next Saturday, will it be sunny or rainy? We have our dataset, and the idea is that we draw a classification line through it to divide the dataset. With regression we measure the distance from the vectors to the line; with classification it's just about which side of the line a point is on: if it's on this side it's sunny, if it's on that side it's rainy. For classification algorithms we have logistic regression, decision trees, random forests, neural networks, Naive Bayes, K-nearest neighbors (also known as KNN), and support vector machines (SVMs).

Let's take a closer look at clustering. Clustering is the process of grouping unlabeled data (unlabeled data means unsupervised learning) based on similarities and differences, so the outcome is data grouped by how similar or different it is. Imagine we have a graph with data on it, and the idea is that we draw boundaries around it to find similar groups; maybe we're recommending purchases to Windows computer users versus Mac computer users. Remember this is unlabeled data, so the label is being inferred, or we're just saying these things are similar. For clustering algorithms we have K-means, K-medoids, density-based, and hierarchical.
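Here is a minimal clustering sketch using scikit-learn's K-means. The points are made-up, unlabeled data, and the choice of two clusters is just for illustration.

```python
from sklearn.cluster import KMeans
import numpy as np

# Made-up, unlabeled data: two loose groups of points on a 2D graph
points = np.array([
    [1.0, 1.1], [1.2, 0.9], [0.8, 1.0],   # one bunch
    [5.0, 5.2], [5.1, 4.8], [4.9, 5.0],   # another bunch
])

# Ask K-means for 2 clusters; no labels are provided, the model infers the grouping
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print("cluster assigned to each point:", kmeans.labels_)
print("cluster centers:", kmeans.cluster_centers_)
```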
Hey, this is Andrew Brown from ExamPro, and we're looking at the confusion matrix. This is a table to visualize the model predictions (the predicted) versus the ground truth labels (the actual); it's also known as an error matrix, and it's useful for classification problems, to determine whether our classification is working as we think it is. So imagine we have a question: how many bananas did these people eat? We have this kind of box where we have predicted versus actual, and it's really comparing the ground truth and what the model predicted. On the exam they'll ask you questions where they might not even say yes or no, maybe just zero and one, and they'll ask you to identify, say, the true positives. The idea is they won't show you the labels, but predicted one and actual one would be a true positive, and predicted zero and actual zero would be a true negative. Another thing they'll ask you about confusion matrices is their size. Right now we're looking at a binary classifier, because we have just two labels, one and two. But you could have three, say one, two, and three; how would you calculate that? Well, there would just be a third row and a third column, because we only ever have actual versus predicted, ground truth versus prediction, so for three labels the matrix has nine cells (3 by 3). The question might not say "cells", it'll just ask for the size.

To understand anomaly detection, let's quickly define what an anomaly is: an abnormal thing, marked by deviation from the norm or standard. Anomaly detection is the process of finding outliers within a dataset, called anomalies: detecting when a piece of data or an access pattern appears suspicious or malicious. Use cases for anomaly detection include data cleaning, intrusion detection, fraud detection, system health monitoring, event detection in sensor networks, ecosystem disturbances, and detection of critical and cascading flaws. Anomaly detection by hand is a very tedious process, so using ML for anomaly detection is more efficient and accurate, and Azure has a service called Anomaly Detector that detects anomalies in data to quickly identify and troubleshoot issues.

Computer vision is when we use machine learning neural networks to gain a high-level understanding of digital images or videos. For computer vision deep learning algorithms we have convolutional neural networks (CNNs), used for image and video recognition; they're inspired by how the human eye processes information and sends it back to the brain. And you have recurrent neural networks (RNNs), which are generally used for handwriting recognition or speech recognition. Of course these algorithms have other applications, but those are the most common use cases. For types of computer vision we have image classification: look at an image or video and classify its place in a category; object detection: identify objects within an image or video and apply labels and location boundaries; semantic segmentation: identify segments or objects by drawing pixel masks around them, which is great for objects in motion; image analysis: analyze an image or video to apply descriptive context labels, so maybe "an employee is sitting at a desk in Tokyo" would be something image analysis would produce; optical character recognition (OCR): find text in images or videos and extract it into digital text for editing; and facial detection: detect faces in a photo or video, draw a location boundary, and label their expression.

For computer vision there are a few Azure and Microsoft offerings to know. There's one called Seeing AI, an AI app developed by Microsoft for iOS: you use your device camera to identify people and objects, and the app audibly describes those objects for people with visual impairments. It's totally free if you have an iOS device; I have an Android phone so I cannot use it, but I hear it's great. Some of the Azure computer vision service offerings are Computer Vision: analyze images and videos, and extract descriptions, tags, objects, and text; Custom Vision: custom image classification and object detection models using your own images; Face: detect and identify people and emotions in images; and Form Recognizer: translate scanned documents into key/value pairs or tabular, editable data.
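As a rough idea of what calling one of these vision services looks like programmatically, here is a hedged sketch of an image-analysis request over REST. The resource endpoint, key, and image URL are placeholders, and the exact API version and response fields may differ from what is shown.

```python
import requests

# Placeholders: you get your own endpoint and key when you create the resource.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"

analyze_url = f"{endpoint}/vision/v3.2/analyze"
params = {"visualFeatures": "Description,Tags"}
headers = {"Ocp-Apim-Subscription-Key": key}
body = {"url": "https://example.com/photo-of-a-dog.jpg"}  # hypothetical image URL

response = requests.post(analyze_url, params=params, headers=headers, json=body)
result = response.json()

# Typically returns a generated caption plus a list of tags with confidence scores
print(result.get("description"), result.get("tags"))
```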
Natural language processing, also known as NLP, is machine learning that can understand the context of a corpus, a corpus being a body of related text. NLP enables you to analyze and interpret text within documents and email messages; interpret or contextualize spoken tokens (for example, customer sentiment analysis, whether a customer is happy or sad); synthesize speech, like a voice assistant talking to you; automatically translate spoken or written phrases and sentences between languages; and interpret spoken or written commands and determine appropriate actions. A very famous example of a voice assistant, or virtual assistant, for Microsoft is Cortana; it uses the Bing search engine to perform tasks such as setting reminders and answering questions for the user, and if you're on a Windows 10 machine it's very easy to activate Cortana by accident. When we're talking about Azure's NLP offerings, we have Text Analytics: sentiment analysis to find out what customers think, finding topic-relevant phrases using key phrase extraction, identifying the language of the text with language detection, and detecting and categorizing entities in your text with named entity recognition. For Translator we have real-time text translation with multi-language support. For the Speech service we can transcribe audible speech into readable, searchable text. And then we have Language Understanding, also known as LUIS, a natural language processing service that enables you to understand human language in your own applications, websites, chatbots, IoT devices, and more. When we talk about conversational AI, it generally uses NLP, so that's where you'll see the overlap.

Let's take a look at conversational AI, which is technology that can participate in conversations with humans. We have chatbots, voice assistants, and interactive voice recognition systems, which are like the second generation of interactive voice response systems: you know when you call in and they say "press these numbers", that's a response system; a recognition system is when they can actually take human speech and translate it into action. The use cases here would be online customer support, replacing human agents for replying to customer FAQs, maybe shipping questions, anything about customer support; accessibility, a voice-operated UI for those who are visually impaired; HR processes, employee training, onboarding, and updating employee information (I've never seen it used like that, but that's what they list as a use case); healthcare, accessible and affordable healthcare, maybe a claims process (I've never seen this either, but maybe in the US, where claims and everything are privatized, it makes more sense); Internet of Things, so IoT devices like Amazon Alexa, Apple Siri, and Google Home, and I suppose Cortana, but it doesn't really have a particular device, which is why I didn't list it there; and computer software, such as autocomplete search on phone or desktop, which is something Cortana could do. For the two Azure services around conversational AI, we have QnA Maker: create a conversational question-and-answer bot from your existing content, also known as a knowledge base; and Azure Bot Service: an intelligent, serverless bot service that scales on demand, used for creating, publishing, and managing bots. So the idea is you make your bot with the first, and then you deploy it with the second.
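Tying the Text Analytics capabilities above to code, here is a hedged sketch of sentiment analysis using the azure-ai-textanalytics Python package. The endpoint and key are placeholders from your own resource, and since these services keep getting renamed it's worth double-checking the current SDK docs.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholders: created with your own Language / Text Analytics resource.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
key = "<your-key>"

client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))

documents = ["The checkout process was painless and the staff were lovely."]
results = client.analyze_sentiment(documents=documents)

for doc in results:
    # Overall sentiment (positive / neutral / negative) plus confidence scores
    print(doc.sentiment, doc.confidence_scores)
```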
Let's take a look at responsible AI, which focuses on ethical, transparent, and accountable uses of AI technology. Microsoft puts responsible AI into practice via its six Microsoft AI principles. This framework was created by Microsoft, so it's not necessarily an industry standard, but it's something Microsoft is pushing hard for people to adopt. The first one is fairness: an AI system should treat all people fairly. We have reliability and safety: an AI system should perform reliably and safely. Privacy and security: AI systems should be secure and respect privacy. Inclusiveness: AI systems should empower everyone and engage people. Transparency: AI systems should be understandable. Accountability: people should be accountable for AI systems. We need to know these in greater detail, so we're going to have a short little video on each of them.

The first on our list is fairness: AI systems should treat all people fairly. An AI system can reinforce existing societal stereotypes, and bias can be introduced during the development of a pipeline. AI systems that are used to allocate or withhold opportunities, resources, or information are a particular concern in domains such as criminal justice, employment and hiring, and finance and credit. An example here would be an ML model designed to select the final applicant in a hiring pipeline: if it incorporates bias based on gender or ethnicity, it may result in an unfair advantage. Azure ML can tell you how each feature influences a model's prediction, which helps in checking for bias. One thing that could be of use is Fairlearn, an open-source Python project to help data scientists improve fairness in AI systems. At the time I made this course a lot of their stuff was still in preview, so the fairness tooling is not 100% there, but it's great to see it coming along.

We're on to the second Microsoft AI principle: AI systems should perform reliably and safely. AI software must be rigorously tested to ensure it works as expected before release to the end user. If there are scenarios where the AI makes mistakes, it is important to release a report quantifying risks and harms to end users, so they are informed of the shortcomings of the AI solution; that's something you should really remember for the exam, they'll definitely ask about it. Areas where concern for the reliability and safety of humans is critically important include autonomous vehicles, health diagnosis and suggesting prescriptions, and autonomous weapon systems. They didn't mention that last one in their content; I was just doing some additional research, and you really don't want mistakes when you have automated weapons, or ethically you shouldn't have them at all, but that's just how the world works. Anyway, that's this category.

We're on to the third Microsoft AI principle: AI systems should be secure and respect privacy. AI can require vast amounts of data to train deep ML models, and the nature of an ML model may require personally identifiable information (PII), so it is important that we ensure protection of user data and that it is not leaked or disclosed. In some cases ML models can be run locally on a user's device so their PII remains on the device, avoiding that vulnerability; this is like edge computing, so that's the concept there. AI security considerations against malicious actors include data origin and lineage, data use (internal versus external), data corruption considerations, and anomaly detection. So there you go.
We're on to the fourth Microsoft AI principle, inclusiveness: AI systems should empower everyone and engage people. If we can design AI solutions for the minority of users, then we can design solutions for the majority of users; when we're talking about minority groups, we're talking about physical ability, gender, sexual orientation, ethnicity, and other factors. This one's really simple, although in practical terms it doesn't 100% make sense, because if you've worked with groups that are deaf or blind, developing technology for them, a lot of the time they need specialized solutions. But the approach here is that if we can design for the minority, we can design for all; that is the principle, and that's what we need to know.

Let's take a look at transparency: AI systems should be understandable. Interpretability and intelligibility mean that the end user can understand the behavior of the AI. Transparency of AI systems can help mitigate unfairness, help developers debug their AI systems, and gain more trust from users. Those who build AI systems should be open about why they're using AI and open about the limitations of their AI systems. Adopting an open-source AI framework can provide transparency, at least from a technical perspective, on the internal workings of an AI system.

We're on to the last Microsoft AI principle, accountability: people should be accountable for AI systems. Structures should be put in place to consistently enact AI principles and take them into account. AI systems should work within frameworks of governance, organizational principles, and ethical and legal standards that are clearly defined. These principles guide Microsoft in how they develop, sell, and advocate when working with third parties, and they push towards regulation based on these principles. So this is Microsoft saying, hey everybody, adopt our model. There aren't many other models, so I guess it's great that Microsoft is taking charge there; I just feel it needs to be a bit more well developed. What we'll do is look at some more practical examples so we can better understand how to apply their principles.

If we really want to understand how to apply the Microsoft AI principles, they've created a nice little tool via a free web app for practical scenarios. They have these cards; you can read through them, they're color coded for different scenarios, and there's a website, so let's go take a look and see what we can learn.

All right, we're here on the Guidelines for Human-AI Interaction, so we can better understand how to put the Microsoft AI principles into practice. They have 18 cards, so let's work our way through and see the examples. The first one on our list: make clear what the system can do, help the user understand what the AI system is capable of doing. Here, PowerPoint QuickStarter builds an online outline to help you get started researching a subject; it displays suggested topics that help you understand the feature's capability. Then we have the Bing app, which shows examples of the types of things you can search for, and the Apple Watch, which displays all the metrics it tracks and explains how. Moving on to the second card: make clear how well the system can do what it can do. Here we have the Office companion experience, Ideas, which docks alongside your work and offers one-click assistance with grammar, design, data insights, richer images, and more; the unassuming term "Ideas", coupled with labeled previews, helps set expectations for the presented suggestions.
The recommender in Apple Music uses language such as "we think you'll like" to communicate uncertainty, and the help page for Outlook web mail explains the filtering of mail into Focused and Other with "it will start working right away, but it gets better with use", making clear that mistakes will happen, that you teach the product, and that you can set overrides.

On to our red cards. Time services based on context: time when to act or interrupt based on the user's current task and environment. When it's time to leave for an appointment, Outlook sends a "time to leave" notification with directions for both driving and public transit, taking into account current location, event location, and real-time traffic information. Then we have Apple Maps: after you use routing, it remembers where you parked your car, and when you open the app a little while later it suggests routing to the location of the parked car. All these Apple examples make me think Microsoft has some kind of partnership with Apple; I guess Microsoft, or Bill Gates, did own Apple shares, so maybe they're closer than we think. Next: show contextually relevant information, based on the user's current task and environment. Powered by machine learning, Acronyms in Word helps you understand shorthand used in your own work environment, relative to the currently open document. On Walmart.com, when the user is looking at a product such as a gaming console, it recommends accessories and games that would go with it. When a user searches for movies, Google shows results including showtimes near the user's location for the current date.

On to our fifth card (we didn't miss one, right? Yes we did; okay, we're on the fifth one): match relevant social norms, ensure the experience is delivered in a way users would expect, given their social and cultural context. When Editor identifies ways to improve writing style, it presents options politely, "consider using..." (that's the Canadian way, being polite). Google Photos is able to recognize pets and uses the wording "important cats and dogs", recognizing that for many people pets are an important part of the family. And you know what, when I started renting my new house I asked, "is there a problem with dogs?", and my landlord said, "of course not, pets are part of the family", and that was something I liked to hear. Cortana uses a semi-formal tone, apologizing when unable to find a contact, which is polite and socially appropriate; I like that. Next: mitigate social biases, ensure the AI system's language and behaviors do not reinforce undesirable and unfair stereotypes and biases. MyAnalytics summarizes how you spend your time at work and then suggests ways to work smarter; one way it mitigates bias is by using gender-neutral icons to represent important people. Sounds good to me. A Bing search for "CEO" or "doctor" shows images of diverse people in terms of gender and ethnicity; sounds good to me. The predictive keyboard for Android suggests both genders when typing a pronoun starting with the letter H.

We're on to our yellow cards. Support efficient invocation: make it easy to invoke or request the system's services when needed. Flash Fill is a helpful timesaver in Excel that can be easily invoked with on-canvas interactions that keep you in flow. On Amazon.com (oh hey, there's Amazon), in addition to the system giving recommendations as you browse, you can manually invoke additional recommendations from the recommender menu. Design Ideas in Microsoft PowerPoint can be invoked with the press of a button if needed; I cannot stand it when that pops up, I always have to tell it to leave me alone.
Support efficient dismissal: make it easy to dismiss or ignore undesired AI system services. This sounds good to me. Microsoft Forms allows you to create custom surveys, quizzes, polls, questionnaires, and forms; some choice questions trigger suggested options positioned beneath the relevant question, and the suggestions can be easily ignored or dismissed. Instagram allows the user to easily hide or report ads that have been suggested by AI, by tapping the ellipsis at the top right of the ad. Siri can be easily dismissed by saying "never mind"; I'm always telling my Alexa "never mind".

Support efficient correction: make it easy to edit, refine, or recover when the AI system is wrong. Automatic alt text generates alt text for photographs using intelligent services in the cloud; descriptions can be easily modified by clicking the Alt Text button in the ribbon. Once you set a reminder with Siri, the UI displays a "tap to edit" link. When Bing automatically corrects spelling errors in search queries, it provides the option to revert to the query as originally typed with one click.

On to card number 10: scope services when in doubt, engage in disambiguation or gracefully degrade the AI system's services when uncertain about a user's goal. When AutoCorrect in Word is uncertain about a correction, it engages in disambiguation by displaying multiple options you can select from. Siri will let you know it has trouble hearing if you don't respond or speak too softly. Bing Maps will provide multiple routing options when unable to recommend a best one.

We're on to card number 11: make clear why the system did what it did, enable users to access an explanation of why the AI system behaved as it did. Office online recommends documents based on history and activity; descriptive text above each document makes it clear why the recommendation is shown. Product recommendations on Amazon.com include a "why recommended" link that shows which products in the user's shopping history inform the recommendations. Facebook enables you to access an explanation of why you are seeing each ad in the news feed.

On to our green cards. Remember recent interactions: maintain short-term memory and allow the user to make efficient references to that memory. When attaching a file, Outlook offers a list of recent files, including recently copied file links; Outlook also remembers people you have interacted with recently and displays them when addressing a new email. Bing search remembers some recent queries, and a search can be continued conversationally, like "how old is he?" after a search for Keanu Reeves. Siri carries context over from one interaction to the next: a text message is created for the person you told Siri to message.

On to card number 13, lucky number 13: learn from user behavior, personalize the user experience by learning from their actions over time. Tap on a search bar in Office applications and search lists the top three commands on your screen that you're most likely to need; to personalize, the technology (called zero query) doesn't even need you to type anything in the search bar to provide a personalized, predictive answer. Amazon.com gives personalized product recommendations based on previous purchases.

On to card 14: update and adapt cautiously, limit disruptive changes when updating and adapting the system's behaviors.
Designer improves slides for Office 365 subscribers by automatically generating design ideas to choose from; Designer has integrated new capabilities such as smart graphics and icon suggestions into the existing user experience, ensuring the updates are not disruptive. The Office "Tell Me" feature shows dynamically recommended items in a designated "try" area to minimize disruptive changes. On to card number 15, "encourage granular feedback": enable the user to provide feedback indicating their preferences during regular interactions with the AI system. Ideas in Excel empowers you to understand your data through high-level visual summaries, trends, and patterns, and it encourages feedback on each suggestion by asking "is this helpful?". Not only does Instagram provide the option to hide specific ads, but it also solicits feedback to understand why the ad is not relevant. And Apple Music's love/dislike buttons are prominent and easily accessible. Number 16, "convey the consequences of user actions": immediately update or convey how user actions will impact future behaviors of the AI system. You can get stock and geographic data types in Excel; it is as easy as typing text into a cell and converting it to the stock data type or geography data type, and when you perform the conversion an icon immediately appears in the converted cells. Upon tapping the like/dislike button for a recommendation in Apple Music, a pop-up informs the user that they'll receive more or fewer similar recommendations. On to card number 17 (we're almost at the end), "provide global controls": allow the user to globally customize what the system monitors and how it behaves. Editor expands on the spelling and grammar checking capabilities of Word to include more advanced proofing and editing designed to ensure the document is readable; Editor can flag a range of critique types and lets you customize them. The thing is, the spell checking in Word is so awful; I don't understand it, it's been years and the spell checking never gets better, so they've got to employ better spell-checking AI, I think. Bing search provides settings that impact the types of results the engine will return, for example SafeSearch. Then we have Google Photos, which allows the user to turn location history on and off for future photos. It's kind of funny seeing Bing in there as an AI example, because at one point it seemed pretty certain that Bing was copying Google's search indexes to learn how to index. I don't know, that's Microsoft for you. We're on to card 18, "notify users about changes": inform the user when the AI system adds or updates its capabilities. The "what's new" dialog in Office informs you about changes by giving an overview of the latest features and updates, including updates to AI features. In Outlook on the web, the help tab includes a "what's new" section that covers updates. So there we go, we made it to the end of the list; I hope that was a fun listen for you. I kind of wish what they had done is actually map each guideline to the responsible AI principles and say where they match, but I guess it's an isolated resource that just ties in, so there we go. Hey, this is Andrew Brown from Exam Pro, and we are looking at Azure Cognitive Services. This is a comprehensive family of AI services and cognitive APIs to help you build intelligent apps: create customizable, pre-trained models built with "breakthrough AI research" (I put that in quotations; I'm kind of throwing some shade at
Microsoft and Azure there, just because it's their marketing material, right). Deploy cognitive services anywhere, from the cloud to the edge, with containers. Get started quickly, no machine learning expertise required, though I think it helps to have a bit of background knowledge. Develop with strict ethical standards; Microsoft loves talking about their responsible AI stuff, "empowering responsible use with industry-leading tools and guidelines". So let's do a quick breakdown of the types of services in this family. For Decision we have Anomaly Detector (identify potential problems early on), Content Moderator (detect potentially offensive or unwanted content), and Personalizer (create rich, personalized experiences for every user). For Language we have Language Understanding, also known as LUIS (I don't know why I didn't put the initialism there, but don't worry, we'll see it again: build natural language understanding into apps, bots, and IoT devices), QnA Maker (create a conversational question-and-answer layer over your data), Text Analytics (detect sentiment, which is whether customers are happy, sad, or glad, plus key phrases and named entities), and Translator (detect and translate more than 90 supported languages). For Speech we have Speech to Text (transcribe audible speech into readable, searchable text), Text to Speech (convert text to lifelike speech for natural interfaces), Speech Translation (integrate real-time speech translation into your apps), and Speaker Recognition (identify and verify the people speaking based on audio). For Vision we have Computer Vision (analyze content in images and videos), Custom Vision (customize image recognition to fit your business needs), and Face (detect and identify people and emotions in images). So there you go. Azure Cognitive Services is an umbrella AI service that enables customers to access multiple AI services with an API key and an API endpoint. What you do is create a new cognitive services resource, and it generates two keys and an endpoint, and that is what you generally use for authentication with the various AI services programmatically; that is something key to the service that you need to know (we'll see a quick code sketch of this key-plus-endpoint pattern in a moment). So, knowledge mining is a discipline in AI that uses a combination of intelligent services to quickly learn from vast amounts of information. It allows organizations to deeply understand and easily explore information, uncover hidden insights, and find relationships and patterns at scale. We have ingest, enrich, and explore as our three steps. For ingest: content from a range of sources, using connectors to first- and third-party data stores; this might be structured data such as databases and CSVs (CSVs would really be semi-structured, but we're not going to get into that level of detail) and unstructured data such as PDFs, videos, images, and audio. For enrich: enrich the content with AI capabilities that let you extract information, find patterns, and deepen understanding, using cognitive services like vision, language, speech, decision, and search. For explore: explore the newly indexed data via search, bots, existing business applications, and data visualizations, enriching structured data in customer relationship management and ERP systems or Power BI. This whole knowledge mining thing is a thing, but I believe the whole model around it is that Azure is showing you how you can use the cognitive services to solve problems without having to invent new solutions.
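Before we get to those use cases, here's a minimal sketch of that key-plus-endpoint pattern. The endpoint and key below are hypothetical placeholders (you'd copy the real ones from your resource in the portal), and the exact URL path and API version vary by service; the part that generalizes is the `Ocp-Apim-Subscription-Key` header.

```python
import requests

# Hypothetical values: copy these from your Cognitive Services resource in the portal.
endpoint = "https://my-cog-resource.cognitiveservices.azure.com"
key = "<one-of-the-two-generated-keys>"

# Cognitive Services REST calls authenticate the same way:
# the resource endpoint plus the Ocp-Apim-Subscription-Key header.
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}

# Language detection is one of the simpler Text Analytics operations
# (the path and version may differ depending on the resource you provisioned).
body = {"documents": [{"id": "1", "text": "Bonjour tout le monde"}]}
resp = requests.post(f"{endpoint}/text/analytics/v3.1/languages",
                     headers=headers, json=body)
print(resp.json())  # should include a detectedLanguage entry for document "1"
```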
So let's look at a bunch of the use cases that Azure has and see where we can find some useful applications. The first one is content research: when organizations task employees with the review and research of technical data, it can be tedious to read page after page of dense text, and knowledge mining helps employees quickly review these dense materials. You have documents, and in the enrichment step you could be doing printed text recognition, key phrase extraction, and custom skills such as technical keyword extraction, format definition mining, and large-scale vocabulary matching; you put it through a search service and now you have a searchable reference library, so it makes things a lot easier to work with. Next we have audit, risk, and compliance management: developers could use knowledge mining to help attorneys quickly identify entities of importance from discovery documents and flag important ideas across documents. So we have documents, then clause extraction, clause classification, GDPR risk, named entity extraction, key phrase extraction, language detection, and automated translation; you put it into a search index and can then use it from a management platform or a Word plugin. Then we have business process management: in industries where bidding competition is fierce, or where the diagnosis of a problem must be quick or in near real time, companies use knowledge mining to avoid costly mistakes. So the client has drilling and completion reports, a document processor, AI services and custom models, a queue for human validation, and intelligent automation; you send the output to a back-end system and/or a data lake and then build your analytics dashboard. Then we have customer support and feedback analysis: for many companies customer support is costly and inefficient, and knowledge mining can help customer support teams quickly find the right answer for a customer inquiry or assess customer sentiment at scale. You have your source data, you do your document cracking, you use cognitive skills (pre-trained services or custom ones), you get enriched documents, and from there you do your projections into a knowledge store, build a search index, and do your analytics with something like Power BI. We have digital asset management (I know there are a lot of these, but it really helps you understand how the cognitive services are going to be useful): given the amount of unstructured data created daily, many companies struggle to make use of, or find information within, their files; knowledge mining through a search index makes it easy for end customers and employees to locate what they're looking for faster. You ingest art metadata and the actual images themselves; for the top layer you're doing a geo-point extractor and a biographical enricher, then down below you're tagging with a custom object detector and a similar-image tagger; you put it in a search index (they love those search indexes) and now you have an art explorer. Then we have contract management, and this is the last one: many companies create products for multiple sectors, hence the business opportunities with different vendors and buyers increase exponentially; knowledge mining can help organizations scour thousands of pages of sources to create accurate bids. Here we have RFP documents; we do risk extraction, printed text recognition, key phrase extraction, organizational extraction, and engineering standards extraction, we create a search index and put the results there, this brings back data, and metadata
extraction feeds back in as well, so this is just like a continuous pipeline. Okay. Hey, this is Andrew Brown from Exam Pro, and we are looking at the Face service. The Azure Face service provides AI algorithms that can detect, recognize, and analyze human faces in images: a face in an image, faces with specific attributes, face landmarks, similar faces, and the same face as a specific identity across a gallery of images. Here is an example image that I ran (we'll do it in the follow-along): it has drawn a bounding box around the face, and there's an ID, which is a unique identifier string for each detected face in an image; these can be unique across a gallery, which is really useful as well. Another cool thing you can do is face landmarks: the idea is that, given a face, it can identify very particular components of it, and up to 27 predefined landmarks are provided with the Face service. Another interesting thing is face attributes: you can check whether someone is wearing accessories (think earrings or lip rings), determine age, the blurriness of the image, what kind of emotion is being expressed, the exposure and contrast of the image, facial hair, gender, glasses, hair in general, head pose (there's a lot of information around that), makeup (which seems to be limited; when we ran it in the lab all we got back was eye makeup and lip makeup, but hey, we get some information), whether they're wearing a mask, noise (whether there are visual artifacts), occlusion (whether an object is blocking parts of the face), and a simple Boolean value for whether the person is smiling or not, which I assume is a very common attribute. That's pretty much all we really need to know about the Face service, and there you go. Hey, this is Andrew Brown from Exam Pro, and we are looking at the Speech and Translate services. Azure's Translator service is a translation service, as the name implies; it can translate 90 languages and dialects, and I was even surprised to find out that it can translate into Klingon. It uses neural machine translation (NMT), replacing its legacy statistical machine translation (SMT). My guess is that "statistical" means it used classical machine learning back around 2010, and then they decided to switch it over to neural networks, which of course is a lot more accurate. The Translator service supports a custom translator, which allows you to extend the service for translation based on your business domain and use cases; so if you use a lot of technical words or particular phrases, you can fine-tune for that. Then there's the other service, the Azure Speech service, and this is a speech synthesis service: what it can do is speech to text, text to speech, and speech translation, so it's synthesizing, creating new voices. We have speech to text: real-time speech to text, batch transcription, multi-device conversation, conversation transcription, and you can create custom speech models. Then you have text to speech, which utilizes Speech Synthesis Markup Language (SSML), which is just a way of formatting it, and it can create custom voices. Then you have the voice assistant, which integrates with the Bot Framework SDK, and speaker recognition, so speaker verification and identification. So there you go.
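Just to make the Translator piece concrete, here's a minimal sketch of the Translator v3 REST API. The key and region are hypothetical placeholders from a Translator resource; everything else follows the same key-plus-endpoint pattern we covered earlier.

```python
import requests

# Hypothetical key and region from a Translator (Cognitive Services) resource.
key = "<translator-key>"
region = "eastus"

url = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "to": ["fr", "de"]}  # translate into French and German
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
}
body = [{"text": "Hello, how are you today?"}]

resp = requests.post(url, params=params, headers=headers, json=body)
for translation in resp.json()[0]["translations"]:
    print(translation["to"], "->", translation["text"])
```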
Hey, this is Andrew Brown from Exam Pro, and we are looking at Text Analytics; this is a service for NLP (natural language processing) for text mining and text analysis. Text Analytics can perform sentiment analysis, so you can find out what people think about your brand or a topic; the feature provides sentiment labels such as negative, neutral, and positive. Then you have opinion mining, which is aspect-based sentiment analysis, for granular information about the opinions related to aspects. Then you have key phrase extraction, to quickly identify the main concepts in text. You have language detection, to detect the language an input text is written in. And you have named entity recognition (NER), to identify and categorize entities in your text as people, places, objects, and quantities; a subset of NER is personally identifiable information (PII). Let's look at a few of these in more detail; some of them are very obvious, but for some of them it helps to have an example. The first we're looking at is key phrase extraction, which quickly identifies the main concepts in text. Key phrase extraction works best when you give it bigger amounts of text to work on; this is the opposite of sentiment analysis, which performs better on smaller amounts of text. Document sizes can be 5,000 or fewer characters per document, and you can have up to a thousand items per collection. So imagine you have a movie review with a lot of text and you want to extract the key phrases; here it identified things like "Borg ship", "Enterprise", "surface travels", and so on. Then you have named entity recognition: this detects words and phrases mentioned in unstructured data that can be associated with one or more semantic types. Here's an example, I think it's medicine-based, and the idea is that it's identifying these words or phrases and then applying a semantic type, so it's saying this is a diagnosis, this is a medication class, and so on. Semantic types can be more broad: there's location, event, person, diagnosis, age; there is a predefined set in Azure that you should expect, and they have a generic one plus one that's specific to health. Looking at sentiment analysis, this graphic makes a lot more sense when we split between sentiment and opinion mining. The idea is that sentiment analysis will apply labels and confidence scores to text at the sentence and document level; labels can include negative, positive, mixed, or neutral, and there will be a confidence score ranging from 0 to 1. Over here we have a sentiment analysis of this line, and it's saying it was a negative sentiment; but look, there's something positive in it and something negative, so was it really negative? That's where opinion mining gets really useful, because it has more granular data, where we have a subject and we have an opinion. Here we can see "the room was great" (positive) but "the staff was unfriendly" (negative), so we have a bit of a split there. Okay.
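Here's a minimal sketch of what a couple of these features look like through the Text Analytics Python SDK (azure-ai-textanalytics). The endpoint and key are hypothetical placeholders from a Language / Text Analytics resource.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Hypothetical endpoint and key from a Language / Text Analytics resource.
client = TextAnalyticsClient(
    endpoint="https://my-text-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<key>"),
)

docs = ["The room was great, but the staff was unfriendly."]

# Sentiment analysis with opinion mining turned on for aspect-level detail.
result = client.analyze_sentiment(docs, show_opinion_mining=True)[0]
print(result.sentiment, result.confidence_scores)   # e.g. mixed, with per-label scores
for sentence in result.sentences:
    for opinion in sentence.mined_opinions:
        # Each mined opinion has a target (the aspect) with its own sentiment.
        print(opinion.target.text, "->", opinion.target.sentiment)

# Key phrase extraction on the same text.
print(client.extract_key_phrases(docs)[0].key_phrases)
```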
Hey, this is Andrew Brown from Exam Pro, and we are looking at optical character recognition, also known as OCR. This is the process of extracting printed or handwritten text into a digital and editable format. OCR can be applied to photos of street signs, products, documents, invoices, bills, financial reports, articles, and more, and here's an example of us extracting nutritional facts off the back of a food product. Azure has two different kinds of APIs that can perform OCR: the OCR API and the Read API. The OCR API uses an older recognition model, supports only images, executes synchronously (returning immediately when it detects text), is suited for smaller amounts of text, supports more languages, and is easier to implement. On the other side we have the Read API: this is an updated recognition model, it supports images and PDFs, it executes asynchronously and parallelizes tasks per line for faster results, it's suited for lots of text, it supports fewer languages, and it's a bit more difficult to implement. When we want to use this service, we'll be using the Computer Vision SDK. Okay. Hey, this is Andrew Brown from Exam Pro, and we're taking a look at the Form Recognizer service. This is a specialized OCR service that translates printed text into digital and editable content, and the magic is that it preserves the structure and relationships of form-like data; that's what makes it so special. If there's an invoice, you see those magenta lines: it's identifying that form-like data. Form Recognizer is used to automate data entry in your applications and enrich your document search capabilities. It can identify key-value pairs, selection marks, and table structures, and it can produce output structures such as original file relationships, bounding boxes, and confidence scores. Form Recognizer is composed of custom document processing models, pre-built models for invoices, receipts, IDs, and business cards, and the layout model. Let's talk about the layout model: it extracts text, selection marks, and table structures, along with bounding box coordinates, from documents; Form Recognizer can extract text, selection marks, and table structures, with the row and column numbers associated with the text, using high-definition optical character enhancement models. So let's touch on custom models. Custom models allow you to extract text, key-value pairs, selection marks, and tabular data from your forms; these models are trained with your own data, so they're tailored to your forms, and you only need five sample input forms to start. A trained document processing model can output structured data that includes the relationships in the original form document. After you train the model, you can test and retrain it and eventually use it to reliably extract data from more forms according to your needs. You have two learning options: unsupervised learning, to understand the layout and relationships between fields and entries in your forms, and supervised learning, to extract values of interest using labeled forms. We've already covered unsupervised and supervised learning, so you should be very familiar with these two. Okay. Form Recognizer also has many pre-built models that are easy to get started with.
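Before we list the fields each pre-built model returns, here's a minimal sketch of calling the pre-built receipt model with the Form Recognizer Python SDK (azure-ai-formrecognizer). The endpoint, key, and receipt URL are hypothetical placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient

# Hypothetical endpoint and key from a Form Recognizer resource.
client = FormRecognizerClient(
    endpoint="https://my-form-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<key>"),
)

# Run the pre-built receipt model against a publicly reachable image URL (placeholder).
poller = client.begin_recognize_receipts_from_url(
    "https://example.com/sample-receipt.jpg"
)
for receipt in poller.result():
    for name, field in receipt.fields.items():
        # Each extracted field comes back with a value and a confidence score.
        print(f"{name}: {field.value} (confidence {field.confidence})")
```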
So let's go look at them and see what kind of fields each extracts by default. The first is receipts: sales receipts from Australia, Canada, Great Britain, India, and the United States will work great here, and the fields it will extract are receipt type, merchant name, merchant phone number, merchant address, transaction date, transaction time, total, subtotal, tax, tip, and items (name, quantity, price, total price). If there's information on a receipt that you're not getting out of these fields, that's where you make your own custom model. Business cards are only available in English, but we can extract contact names (first name, last name), company names, departments, job titles, emails, websites, addresses, mobile phones, faxes, work phones, and other phone numbers; I'm not sure how many people are using business cards these days, but hey, they have it as an option. For invoices, it extracts data from invoices in various formats and returns structured data: customer name, customer ID, purchase order, invoice ID, invoice date, due date, vendor name, vendor address, vendor address recipient, customer address, customer address recipient, billing address, billing address recipient, shipping address, subtotal, total tax, invoice total, amount due, service address, remittance address, service start date and end date, and previous unpaid balance; they even have fields for line items: amount, description, quantity, unit price, product code, unit, date, and tax. Then for IDs, which could be worldwide passports, US driver's licenses, things like that, you have fields such as country/region, date of birth, date of expiration, document number, first name, last name, nationality, sex, machine-readable zone (I'm not sure what that is), document type, and address and region. There are some additional features with some of these models that we didn't really cover; it's not that important, but yeah, there we go. Hey, this is Andrew Brown from Exam Pro, and we are looking at Language Understanding, or LUIS, or "Louise", depending on how you'd like to say it. This is a no-code ML service to build natural language into apps, bots, and IoT devices, to quickly create enterprise-ready custom models that continuously improve. LUIS (I'm just going to call it that, because that's what I prefer) is accessed via its own isolated domain at luis.ai,
and it utilizes NLP and NLU. NLU is the ability to transform a linguistic statement into a representation that enables you to understand your users naturally; it is intended to focus on intention and extraction, meaning what the users want and what the users are talking about. The LUIS application is composed of a schema, and the schema is auto-generated for you when you use the LUIS web interface, so you definitely aren't going to be writing this by hand, but it helps to see what's in there; if you do have programmatic skills, you can obviously make better use of the service than just the web interface. The schema defines intents, which are what the users are asking for (a LUIS app always contains a None intent; we'll talk about why in a moment), and entities, which are the parts of the intent used to determine the answer. Then you also have utterances: examples of user input that include intents and entities, used to train the ML model to match predictions against real user input. An intent requires one or more example utterances for training, and it is recommended to have 15 to 30 example utterances. To explicitly train the model to ignore an utterance, you use the None intent. So intents classify user utterances, and entities extract data from utterances; hopefully that makes sense (I always get this stuff mixed up, and it takes me a bit of time to keep it straight). There is more to it than just these things, there are features and other concepts, but for the AI-900 we don't need to go that deep. Okay, just to visualize this and make it a bit easier: imagine we have this utterance; "two" and "Toronto" would be the entities, this is the example utterance, and then the idea is that you have the intent, and if you look at the keyword "classify", that really helps: the intent is a classification of this example utterance, and that's how the ML model is going to learn. Okay.
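To make the schema pieces concrete, here's a small, purely illustrative sketch; the intent and entity names are made up for illustration and are not an exported LUIS schema.

```python
# Hypothetical booking example: the intent and entity names are made up,
# not taken from a real exported LUIS app schema.
example_utterance = "Book two tickets to Toronto"

training_example = {
    "text": example_utterance,
    "intent": "BookTickets",             # what the user wants (classification target)
    "entities": [                         # what the user is talking about (extraction)
        {"entity": "Quantity", "value": "two"},
        {"entity": "Destination", "value": "Toronto"},
    ],
}

# A LUIS app would have many such labeled utterances per intent (15 to 30 is
# recommended), plus the built-in "None" intent for utterances to ignore.
intents = ["BookTickets", "CancelTickets", "None"]
```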
Hey, this is Andrew Brown from Exam Pro, and we are looking at the QnA Maker service. This is a cloud-based NLP service that allows you to create a natural conversational layer over your data. QnA Maker is hosted on its own isolated domain at qnamaker.ai. It will help you find the most appropriate answer for any input from your custom knowledge base of information. It's commonly used to build conversational clients, which include social apps, chat bots, and speech-enabled desktop applications. QnA Maker doesn't store customer data; all customer data is stored in the region where the customer deploys the dependent service instances. So let's look at some of the use cases. When you have static information, you can use QnA Maker with your knowledge base of answers; this knowledge base is custom to your needs, and you build it with documents such as PDFs and URLs. When you want to provide the same answer to a repeated question or command: when different users submit the same question, the same answer is returned. When you want to filter static information based on meta-information: metadata tags provide additional filtering options relevant to your client application's users and the information; common metadata includes chit-chat, content type or format, content purpose, and content freshness. And there's a use case for when you want to manage a bot conversation that includes static information: your knowledge base takes a user's conversational text or command and answers it, and if the answer is part of a predetermined conversation flow, represented in the knowledge base with multi-turn context, the bot can easily provide this flow. QnA Maker imports your content into a knowledge base of question-and-answer pairs, and it can build your knowledge base from an existing document, manual, or website (URL, DOCX, PDF). I thought this was the coolest thing: you can basically have anyone write a DOCX, and as long as it has headings and text (and I think you can even extract images), it'll just turn it into the bot; it saves you so much time, it's crazy. It will use ML to extract the question-and-answer pairs; the content of the question-and-answer pairs includes all the alternate forms of the question, metadata tags used to filter choices during the search, and follow-up prompts to continue the search refinement. QnA Maker stores answer text in Markdown. Once your knowledge base is imported, you can fine-tune the imported results by editing the question-and-answer pairs, as seen here. Then there is the chat box, so you can converse with your bot through a chat box; I wouldn't say it's particularly a feature of QnA Maker, but I just want you to know that's how you'd interact with it. Whether you're using the QnA Maker portal, the Azure Bot Service, the Bot Framework Composer, or channels (where you'll get an embeddable one), you'll see this box where you can start typing your questions and get back answers to test it. An example here is a multi-turn conversation: somebody asked a generic question, and it replied "hey, are you talking about AWS or Azure?", which is kind of a follow-up prompt; we'll talk about multi-turn in a second, but that's something I want you to know about. Okay, so chit-chat is a feature in QnA Maker that allows you to easily add a pre-populated set of top chit-chat into your knowledge base; the data set has about 100 scenarios of chit-chat in the voices of multiple personas. The idea is that if someone says something random, like "how are you doing?" or "what's the weather today?", things that your bot wouldn't necessarily know, it has canned answers, and they're going to differ based on how you want the responses to sound.
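Once a knowledge base is published, you query it over REST. Here's a minimal sketch, assuming hypothetical runtime endpoint, knowledge base ID, and endpoint key values (QnA Maker shows the real ones on its Publish page).

```python
import requests

# Hypothetical values shown on the QnA Maker "Publish" page.
runtime_endpoint = "https://my-qna-resource.azurewebsites.net"
kb_id = "<knowledge-base-id>"
endpoint_key = "<endpoint-key>"

url = f"{runtime_endpoint}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
headers = {
    "Authorization": f"EndpointKey {endpoint_key}",
    "Content-Type": "application/json",
}
body = {"question": "How do I reset my password?", "top": 1}

resp = requests.post(url, headers=headers, json=body)
for answer in resp.json().get("answers", []):
    # Each answer carries a confidence score; multi-turn prompts ride along too.
    print(answer["score"], answer["answer"])
```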
There's a concept of layered ranking: the QnA Maker system uses a layered ranking approach. The data is stored in Azure Search, which also serves as the first ranking layer, and the top results from Azure Search are then passed through QnA Maker's NLP re-ranking model to produce the final results and confidence scores. Touching on multi-turn conversation: this uses follow-up prompts and context to manage the multiple turns (known as multi-turn) for your bot, from one question to another. When a question can't be answered in a single turn, that is when you're using multi-turn conversation. QnA Maker provides multi-turn prompts and active learning to help you improve your question-and-answer pairs, and it gives you the opportunity to connect question-and-answer pairs; the connection allows the client application to provide a top answer and then more questions to refine the search for a final answer. After the knowledge base receives questions from users at the published endpoint, QnA Maker applies active learning to these real-world questions to suggest changes to your knowledge base to improve the quality. All right. Hey, this is Andrew Brown from Exam Pro, and we are looking at the Azure Bot Service. The Azure Bot Service is an intelligent, serverless bot service that scales on demand, used for creating, publishing, and managing bots. You can register and publish a variety of bots from the Azure portal; there are a bunch here I've never heard of, probably from third-party providers partnered with Azure, and then there are the ones we would know, like the Azure Health Bot, the Azure Bot, or the Web App Bot, which is a more generic one. The Azure Bot Service can integrate your bot with other Azure, Microsoft, or third-party services via channels, so you can have Direct Line, Alexa, Office 365, Facebook, Kik, LINE, Microsoft Teams, Skype, Twilio, and more. Two things commonly associated with the Azure Bot Service are the Bot Framework SDK and the Bot Framework Composer; in fact it was really hard to make this slide, because the docs weren't very descriptive about the Bot Service itself and just wanted to push these other two things. Let's talk about the Bot Framework SDK. The Bot Framework SDK, now at version 4, is an open-source SDK that enables developers to model and build sophisticated conversations. The Bot Framework, along with the Azure Bot Service, provides an end-to-end workflow, so we can design, build, test, publish, connect, and evaluate our bots. With this framework, developers can create bots that use speech, understand natural language, handle questions and answers, and more. The Bot Framework includes a modular, extensible SDK for building bots, as well as tools, templates, and related AI services. Then you have the Bot Framework Composer, which is built on top of the Bot Framework SDK; it's an open-source IDE for developers to author, test, provision, and manage conversational experiences. You can download it as an app on Windows, OS X, and Linux (it's probably built using web technology). Here is the actual app, and you can see there's a bit of a flow and things you can do in there. You can use C# or Node to build your bot, you can deploy the bot to Azure Web Apps or Azure Functions, you have templates to build a QnA Maker bot, an enterprise or personal assistant bot, a language bot, or a calendar or people bot, you can test and debug via the Bot Framework Emulator, and it has a built-in package manager. There's a lot more to these things, but for the AI-900, this is all we need to know. But yeah, there you go.
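Just to give a feel for the Bot Framework SDK, here's a tiny sketch of handling a message turn with the Python flavor (botbuilder-core). A real bot also needs an adapter and a web host to receive activities from a channel, which I'm leaving out; this is only the conversation logic.

```python
# Minimal flavor of the Bot Framework SDK for Python (botbuilder-core).
# A real bot also needs a BotFrameworkAdapter and a web app (e.g. aiohttp) to
# receive activities from a channel; that plumbing is omitted to keep this short.
from botbuilder.core import ActivityHandler, TurnContext


class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo back whatever the user typed on whichever channel they used.
        await turn_context.send_activity(f"You said: {turn_context.activity.text}")

    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        # Greet anyone who joins the conversation (other than the bot itself).
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hello and welcome!")
```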
Hey, this is Andrew Brown from Exam Pro, and we are looking at the Azure Machine Learning service. I want you to know there's a classic version of the service; it's still accessible in the portal, it's not on the exam, and we are going to 100% avoid it. It has severe limitations, and you cannot transfer anything over from the classic version to the new one. The one we're going to focus on is the Azure Machine Learning service; you create studios within it, so when you hear me say Azure Machine Learning Studio, I'm referring to the new one. It's a service that simplifies running AI/ML related workloads, allowing you to build flexible, automated ML pipelines, use Python or R, and run deep learning workloads such as TensorFlow. We can make Jupyter notebooks in here, to build and document your machine learning models as you build them, and to share and collaborate. There's the Azure Machine Learning SDK for Python, an SDK designed specifically to interact with the Azure Machine Learning services. It supports MLOps (machine learning operations): end-to-end automation of ML model pipelines, CI/CD, training, and inference. There's the Azure Machine Learning designer, a drag-and-drop interface to visually build, test, and deploy machine learning models (technically pipelines, I guess). There's a data labeling service, where you assemble a team of humans to label your training data, and responsible machine learning, for model fairness through disparity metrics and mitigating unfairness; at the time of this recording it's not very good, but it's supposed to tie in with the responsible AI that Microsoft is always promoting. Okay, so once we launch our own studio within the Azure Machine Learning service, you're going to get this big navigation bar on the left-hand side, and it shows there's a lot of stuff in here, so let's break down what all these things are. For authoring we've got Notebooks: these are Jupyter notebooks, an IDE to write Python code to build ML models; they have their own preview, which I don't really like, but there is a way to bridge it over to Jupyter Notebook or Visual Studio Code. We have AutoML, a completely automated process to build and train ML models; you're limited to only three task types, but still, that's great. We have the Designer, a visual drag-and-drop designer to construct end-to-end ML pipelines. For assets we have Datasets (data you can upload, which will be used for training), Experiments (when you run a training job, the runs are detailed here), Pipelines (ML workflows you have built or have used in the designer), Models (a model registry containing trained models that can be deployed), and Endpoints (when you deploy a model it's hosted on an accessible endpoint, so you can access it via a REST API or maybe the SDK). For manage we've got Compute (the underlying computing instances used for notebooks, training, and inference), Environments (reproducible Python environments for machine learning experiments), Datastores (data repositories where your data resides), Data labeling (where you have humans, with ML-assisted labeling, to label your data for supervised learning), and Linked services (external services you can connect to the workspace, such as Azure Synapse Analytics). Let's take a look at the types of compute that are available in Azure Machine Learning Studio. We've got four categories: compute instances, which are development workstations that data scientists can use to work with data and models; compute clusters,
which are scalable clusters of VMs for on-demand processing of experiment code; inference clusters, which are deployment targets for predictive services that use your trained models; and attached compute, which links to existing Azure compute resources such as Azure VMs and Azure Databricks clusters. Now, what's interesting here is that with a compute instance you can open it in JupyterLab, Jupyter, VS Code, RStudio, or a terminal, but you can also work with your compute instances, your development workstations, directly in the studio, which is the way I do it. What's also interesting is that for inference, when you want to make a prediction, you use Azure Kubernetes Service or Azure Container Instances, and I didn't see those show up under here, so I'm a bit confused about where they appear; maybe we'll discover as we do the follow-alongs that they do appear here, but I'm not sure about that one. But yeah, those are the four there, okay. So within Azure Machine Learning Studio we can do data labeling: we create data labeling jobs to prepare your ground truth for supervised learning. We have two options: human-in-the-loop labeling, where you have a team of humans that will apply labels (these are humans you grant access to for labeling), and machine-learning-assisted data labeling, where you use ML to help perform the labeling. You can export the labeled data for machine learning experimentation at any time; users often export multiple times and train different models rather than waiting for all the images to be labeled. Image labels can be exported in COCO format (that's why we talked about COCO earlier in our data set section) or as an Azure Machine Learning dataset, which is the dataset format that makes it easy to use for training in Azure Machine Learning, so generally you want to use that format. The idea is that you choose a labeling task type, you get a UI, and people go in, click buttons, and do the labeling. Okay, so an Azure ML datastore securely connects you to storage services on Azure without putting your authentication credentials, or the integrity of your original data source, at risk. Here are examples of data sources available to us in the studio, so let's quickly go through them. We have Azure Blob Storage, which is data stored as objects distributed across many machines; Azure File Share, a mountable file share via the SMB and NFS protocols; Azure Data Lake Storage Gen2, which is blob storage designed for vast amounts of big data analytics; Azure SQL, a fully managed MSSQL relational database; Azure Database for PostgreSQL, an open-source relational database often considered an object-relational database, preferred by developers; and Azure Database for MySQL, another open-source relational database, the most popular one, and considered a pure relational database. Okay, so Azure ML datasets make it easy to register your data sets for use with your ML workloads. What you do is add a dataset, and you get a bunch of metadata associated with it; you can also upload the data set again to have multiple versions, so you'll have a current version and a latest version. It's very easy to get started working with them, because they come with sample code for the Azure ML SDK to import them into your Jupyter notebooks. For datasets you can generate profiles that will give you summary statistics, the distribution of the data, and more; you will have to use a compute instance to generate that data, so you press the Generate Profile button and the profile gets stored (I think it's in blob storage).
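The sample code the studio gives you looks roughly like this. It's a minimal sketch using the v1 azureml-core SDK, and it assumes a config.json downloaded from the workspace plus a CSV already sitting in the default datastore; the file path and dataset name are hypothetical.

```python
# Sketch using the (v1) azureml-core SDK, the same SDK the studio's sample code uses.
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()               # reads config.json for the workspace
datastore = ws.get_default_datastore()     # the workspace's default blob datastore

# Wrap a delimited file in the datastore as a tabular dataset...
dataset = Dataset.Tabular.from_delimited_files(
    path=[(datastore, "data/diabetes.csv")]   # placeholder path in the datastore
)

# ...and register it so it shows up under Assets > Datasets with versioning.
dataset = dataset.register(
    workspace=ws, name="diabetes-data", create_new_version=True
)

# Later, anyone in the workspace can pull it back by name.
df = Dataset.get_by_name(ws, "diabetes-data").to_pandas_dataframe()
print(df.head())
```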
There are also open datasets: these are publicly hosted data sets that are commonly used for learning how to build ML models. You go to Open Datasets and just choose one; it's a curated list of open data sets that you can quickly add to your data store, great for learning how to use AutoML, the Azure Machine Learning designer, or any kind of ML workload if you're new to it. That's why we covered MNIST and COCO earlier, because those are some common data sets. There you go. Taking a look at Azure ML experiments: this is a logical grouping of Azure runs, and a run is the act of running an ML task on a virtual machine or container. Here's a list of them, and runs can be various types of ML tasks, so scripts could be pre-processing, AutoML, or a training pipeline. What it's not going to include is inference: once you've deployed your model or pipeline and you make predictions via a request, those just aren't going to show up under here. Okay, so we have Azure ML pipelines, which are executable workflows of a complete machine learning task, not to be confused with Azure Pipelines (which is part of Azure DevOps) or Data Factory (which has its own pipelines); this is a totally separate thing. Subtasks are encapsulated as a series of steps within the pipeline. Independent steps allow multiple data scientists to work on the same pipeline at the same time without over-taxing compute resources, and separate steps also make it easy to use different compute types and sizes for each step. When you rerun a pipeline, the run jumps to the steps that need to be rerun, such as an updated training script; steps that do not need to be rerun are skipped. After a pipeline has been published, you can configure a REST endpoint, which allows you to rerun the pipeline from any platform or stack. There are two ways to build pipelines: you can use the Azure ML designer, or build them programmatically using the Azure Machine Learning Python SDK; with the SDK you create steps and then assemble all the steps into a pipeline (there's a sketch of what that looks like below). The Azure Machine Learning designer lets you quickly build Azure ML pipelines without having to write any code. Here is what it looks like: your pipeline is quite visual, and on the left-hand side you have a bunch of pre-built assets you can drag out, so it's a really fast way of building a pipeline, though you do have to have a good understanding of ML pipelines end-to-end to make good use of it. Once you've trained your pipeline, you can create an inference pipeline; you drop down and say whether you want it to be real-time or batch, and you can toggle between them later. There's a lot to this service, but for the AI-900 we don't have to dive too deep. Okay, so Azure ML models, or the model registry, allows you to create, manage, and track your registered models as incremental versions under the same name; each time you register a model with the same name as an existing one, the registry assumes it's a new version. Additionally, you can provide metadata tags and use those tags when you search for models. So it's just a really easy way to share, deploy, or download your models. Okay.
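Going back to the SDK route for pipelines mentioned above, here's the sketch: two steps assembled into a pipeline and submitted as an experiment run. The script names, folder, compute cluster name, and experiment name are all hypothetical placeholders, and this uses the same v1 azureml SDK as before.

```python
# Minimal pipeline sketch with the (v1) azureml SDK; names are placeholders.
from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

prep_step = PythonScriptStep(
    name="prepare data",
    script_name="prep.py",                # your pre-processing script
    source_directory="pipeline_scripts",
    compute_target="cpu-cluster",          # an existing compute cluster
)

train_step = PythonScriptStep(
    name="train model",
    script_name="train.py",
    source_directory="pipeline_scripts",
    compute_target="cpu-cluster",
)

# Assemble the steps into a pipeline and submit it as an experiment run.
pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "my-training-pipeline").submit(pipeline)
run.wait_for_completion(show_output=True)
```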
Azure ML endpoints allow you to deploy machine learning models as a web service. The workflow for deploying a model is: register the model, prepare an entry script, prepare an inference configuration, deploy the model locally to ensure everything works, choose a compute target, deploy the model to the cloud, and test the resulting web service. We have two options here: real-time endpoints, which provide remote access to invoke the ML model service running on either Azure Kubernetes Service (AKS) or Azure Container Instances (ACI), and pipeline endpoints, which provide remote access to invoke an ML pipeline; you can parameterize a pipeline endpoint for managed repeatability in batch scoring and retraining scenarios. So you can deploy a model to an endpoint, and it will be deployed to either AKS or ACI, as we said; the thing is, when you do that, just understand that it's going to be shown under AKS or ACI within the Azure portal, not consolidated under Azure Machine Learning Studio. When you've deployed a real-time endpoint, you can test the endpoint by sending either a single request or a batch request; they have a nice form where it's single, or a CSV that you can send, so there you go. Azure has a built-in Jupyter-like notebook editor, so you can build and train your ML models, and here is an example of it. I personally don't like it too much, but that's okay, because we have some other options to make it easier. What you do is choose your compute instance to run the notebook, and you choose your kernel, which is a pre-loaded programming language and set of libraries for different use cases (that's a Jupyter kernel concept). You can open the notebook in a more familiar IDE such as VS Code, Jupyter Notebook classic, or JupyterLab; you drop it down, choose it, open it up, and now you're in more familiar territory. The VS Code one is exactly the same experience as the one in Azure ML Studio. I personally don't like it; I think most people are going to be using the notebooks, but it's great that they have all those options. So Azure Automated Machine Learning, also known as AutoML, automates the process of creating an ML model. With Azure AutoML you supply a data set, choose a task type, and then AutoML will train and tune your model. Here are the task types, so let's quickly go through them: classification, when you need to make a prediction based on several classes (binary classification, multiclass classification); regression, when you need to predict a continuous numeric value; and time-series forecasting, when you need to predict a value based on time. Let's look at them a little more in detail. Classification is a type of supervised learning in which the model learns using training data and applies those learnings to new data; the goal of classification is to predict which categories new data will fall into, based on learning from its training data. Binary classification is where a record is labeled out of two possible labels, so maybe it's true or false, zero or one, just two values; multiclass classification is where a record is labeled out of a range of labels, so it could be happy, sad, mad, or rad (and I can see there's a spelling mistake there, there should be an F, so let's just correct that, there we go). You can also apply deep learning, and if you turn deep learning on you probably want to use a GPU compute instance or compute cluster, because deep learning really prefers GPUs. Okay.
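Before we move on to regression and forecasting, here's a rough sketch of what submitting an AutoML classification run looks like through the Python SDK. It assumes the registered dataset from earlier, a label column, and a compute cluster, all hypothetical names; this is the v1 azureml SDK again.

```python
# Minimal AutoML classification sketch with the (v1) azureml SDK; names are placeholders.
from azureml.core import Workspace, Experiment, Dataset
from azureml.core.compute import ComputeTarget
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, "diabetes-data")     # a registered tabular dataset
compute = ComputeTarget(workspace=ws, name="cpu-cluster")    # an existing compute cluster

automl_config = AutoMLConfig(
    task="classification",             # or "regression" / "forecasting"
    primary_metric="accuracy",         # the metric AutoML optimizes for
    training_data=training_data,
    label_column_name="Diabetic",      # the column we want to predict
    compute_target=compute,
    experiment_timeout_hours=0.5,
    featurization="auto",              # enables automatic featurization and guardrails
)

run = Experiment(ws, "automl-classification").submit(automl_config)
run.wait_for_completion(show_output=True)

# AutoML surfaces the best candidate (often a voting ensemble) when it finishes.
best_run, fitted_model = run.get_output()
print(best_run.id, fitted_model)
```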
Looking at regression: it's also a type of supervised learning where the model learns using training data and applies those learnings to new data, but it's a bit different, because the goal of regression is to predict a continuous value in the future. Then you have time-series forecasting, and this sounds a lot like regression because it is: forecast revenue, inventory, sales, or customer demand. An automated time-series experiment is treated as a multivariate regression problem; past time-series values are "pivoted" to become additional dimensions for the regressor, together with other predictors, and unlike classical time-series methods this has the advantage of naturally incorporating multiple contextual variables and their relationships to one another during training. Advanced configurations here include holiday detection and featurization, time-series and deep-learning neural networks (Auto-ARIMA, Prophet, ForecastTCN), many-models support through grouping, rolling-origin cross-validation, configurable lags, and rolling-window aggregate features. So there you go. Within AutoML we have data guardrails, and these are run by AutoML when automatic featurization is enabled; it's a sequence of checks to ensure high-quality input data is being used to train the model. Just to show you some of the information: it can apply validation split handling (the input data has been split for validation to improve the performance of the model), missing feature value imputation (for example, no features with missing values were detected in the training data), and high-cardinality feature detection (your inputs were analyzed and no high-cardinality features were detected; high cardinality means a feature has a very large number of distinct values, which makes the data sparse and hard to process), so that's something good to check against. Let's talk about AutoML's automatic featurization. During model training with AutoML, one of the following scaling or normalization techniques will be applied to each model. The first is StandardScaleWrapper: standardize features by removing the mean and scaling to unit variance. MinMaxScalar: transform features by scaling each feature by that column's minimum and maximum. MaxAbsScaler: scale each feature by its maximum absolute value. RobustScalar: scale features by their quantile range. PCA: linear dimensionality reduction using singular value decomposition of the data to project it to a lower-dimensional space (dimensionality reduction is very useful if your data is too complex; if, say, you have too many features or categories, 20, 30, 40 of them, you may want to reduce the dimensions so your machine learning model is not overwhelmed). TruncatedSVDWrapper: this transformer performs linear dimensionality reduction by means of truncated singular value decomposition; contrary to PCA, the estimator does not center the data before computing the singular value decomposition, which means it can work with SciPy sparse matrices efficiently. SparseNormalizer: each sample (that is, each row of the data matrix) with at least one non-zero component is rescaled independently of the other samples so that its norm (L1 or L2) equals one.
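These are essentially scikit-learn style transforms under the hood, so here's a tiny standalone illustration (not the AutoML internals themselves) of what standard scaling and min-max scaling do to one numeric column.

```python
# Standalone illustration of two of the transforms listed above, using scikit-learn
# directly; AutoML applies its own wrapped versions of these during featurization.
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

ages = np.array([[18.0], [25.0], [40.0], [63.0]])    # one numeric feature column

standardized = StandardScaler().fit_transform(ages)  # mean 0, unit variance
rescaled = MinMaxScaler().fit_transform(ages)        # squeezed into the 0-1 range

print(standardized.round(2).ravel())  # approx [-1.07 -0.67  0.2   1.54]
print(rescaled.round(2).ravel())      # [0.   0.16 0.49 1.  ]
```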
Now, on the exam they're probably not going to ask you about these specific techniques, but I like to give you the exposure and to show you that AutoML is doing all of this; this is pre-processing stuff that you'd otherwise have to do yourself, and it just takes care of it for you. Okay, so within Azure AutoML there's a feature called model selection, which is the task of selecting a statistical model from a set of candidate models; Azure AutoML will try many different ML algorithms and will recommend the best-performing candidate. Here is a list, and I want to point out that down at the bottom there are three pages, 53 models; it's a lot of models. You can see that the top candidate it chose is called a voting ensemble; that's an ensemble algorithm, where you take multiple weaker ML models and combine them to make a stronger one. Notice it shows us the results, and this is what we're looking for: the primary metric; the highest value should indicate the model we want to use. You can get an explanation of the model (that's known as explainability), and if you're a data scientist you might be a bit smarter and say "well, I know this other one should be better, so I'll use it and tweak it", but if you don't know what you're doing, you just go with the top one. Okay, so we just saw that we had a top candidate model, and there can be an explanation to understand the effectiveness of that model; this is called MLX, machine learning explainability, which is the process of explaining and interpreting ML or deep learning models. MLX can help machine learning developers better understand and interpret a model's behavior. After your top candidate model is selected by Azure AutoML, you can get an explanation of the internals across various factors: model performance, data set explorer, aggregate feature importance, and individual feature importance. This one is aggregate feature importance, so what it's showing (and it's actually cut off here) is the features that most affect the model's outcome; I think this is the diabetes data set, so BMI would be one that is a huge influencer. Okay. So the primary metric is a parameter that determines the metric to be used during model training for optimization. For classification we have a few, and for regression and time series we have a few, but you'll have these task types, and underneath you'll choose the additional configuration, and that's where you can override the primary metric; it might just be auto-detected for you (it might sample some of your data set to make a guess), but you might have to override it yourself. Just going through some scenarios, we'll break it down into two categories. First, metrics suited for larger data sets that are well balanced; well balanced means that your data set is evenly distributed, so if you have classifications A and B and you have 100 of each, they're well balanced: you don't have one labeled subset of your data set much larger than the other. So accuracy is great for image classification, sentiment analysis, and churn prediction; average_precision_score_weighted is for sentiment analysis; norm_macro_recall is for churn prediction; and precision_score_weighted, I'm uncertain what that would be good for, maybe sentiment analysis. Then there are metrics suited for smaller data sets that are imbalanced; that's where, on the label, you might have 10 records for
one class and 500 for the other. So AUC_weighted works for fraud detection, image classification, anomaly detection, and spam detection. On to regression scenarios, which we'll break down by range. When you have a very wide range, spearman_correlation works really well, as does r2_score, which is great for airline delay, salary estimation, and bug resolution time. For smaller ranges you're talking about normalized_root_mean_squared_error, so price predictions and review tip score predictions, and there's normalized_mean_absolute_error as another one (they don't give a description for it). For time series it's the same thing, just in the context of time series, so forecasting. All right. Another option we can change is the validation type when we're setting up our ML model. Model validation is when we compare the results of our training data set to our test data set; model validation occurs after we train the model. You can drop it down and you have some options: auto, k-fold cross-validation, Monte Carlo cross-validation, and train-validation split. I'm not going to get into the details of those; I don't think they'll show up on the AI-900 exam, but I just want you to be aware that you have those options. Okay. Hey, this is Andrew Brown from Exam Pro, and we are taking a look at Custom Vision; this is a fully managed, no-code service to quickly build your own classification and object detection ML models. The service is hosted on its own isolated domain at customvision.ai. The first idea is that you upload your images, bringing your own labeled images or using Custom Vision to quickly add tags to any unlabeled images; you use the labeled images to teach Custom Vision the concepts you care about, which is training; and you use simple REST API calls to quickly tag new images with your custom computer vision model, so you can evaluate it. Okay, so when we launch Custom Vision we have to create a project, and with that we need to choose a project type: classification or object detection. Reviewing classification, you have the option between multi-label, when you want to apply many tags to an image (think of an image that contains both a cat and a dog), and multi-class, when you only have one possible tag to apply to an image (it's either an apple, a banana, or an orange, not several of those things). Then you have object detection, which is when we want to detect various objects within an image. You also need to choose a domain; a domain is a Microsoft-managed data set that is used for training the ML model, and there are different domains suited for different use cases. Let's look first at the image classification domains; here is the big list, the domains being over here, and we'll go through them. General is optimized for a broad range of image classification tasks; if none of the other specified domains are appropriate, or you're unsure which domain to choose, select one of the general domains. General A1 is optimized for better accuracy with comparable inference time to the General domain; it's recommended for larger data sets or more difficult user scenarios, and this domain requires more training time. A2 is optimized for better accuracy with faster inference times than A1 and General; it's recommended for most data sets, and this domain requires less training time than General and A1. Then you have Food, optimized for photographs of dishes as you would see them on a
Okay, so when we launch Custom Vision we have to create a project, and with that we need to choose a project type: classification or object detection. Reviewing classification, you have the option between multilabel, which is when you want to apply many tags to one image (think of an image that contains both a cat and a dog), and multiclass, which is when only one tag can apply to an image, so it's either an apple, a banana or an orange, never multiples of those. Then there's object detection, which is when we want to detect the various objects within an image. You also need to choose a domain: a domain is a Microsoft-managed dataset that is used for training the ML model, and different domains are suited to different use cases, so let's look first at the image classification domains. General is optimized for a broad range of image classification tasks; if none of the other specified domains are appropriate, or you're unsure which domain to choose, select one of the general domains. General [A1] is optimized for better accuracy with comparable inference time to the general domain; it's recommended for larger datasets or more difficult user scenarios, and it requires more training time. General [A2] is optimized for better accuracy with faster inference times than A1 and the general domain; it's recommended for most datasets and requires less training time than General and A1. Food is optimized for photographs of dishes as you would see them on a restaurant menu; if you want to classify photographs of individual fruits or vegetables, use the Food domain. Landmarks is optimized for recognizable landmarks, both natural and artificial; it works best when the landmark is clearly visible in the photograph, and it works even if the landmark is slightly obstructed by people in front of it. Retail is optimized for images found in a shopping catalog or shopping website; if you want high precision classifying between dresses, pants and shirts, use this domain. The compact domains are optimized for the constraints of real-time classification on edge devices. Then we have the object detection domains, a much shorter list, so we'll get through it a lot quicker. General is optimized for a broad range of object detection tasks; if none of the other domains are appropriate, or you're unsure, choose the general one. General [A1] is optimized for better accuracy with comparable inference time to the general domain; it's recommended for more accurate region locations, larger datasets or more difficult use-case scenarios; it requires more training time, and results are not deterministic, so expect a plus or minus 1% mean average precision difference with the same training data. Logo is optimized for finding brand logos in images, and Products on Shelves is optimized for detecting and classifying products on shelves. So there you go.

Okay, let's get some more practical knowledge of the service. For image classification you upload multiple images and apply a single label or multiple labels to the entire image, so here I have a bunch of images uploaded and my tags over here, and they can be either multiple or singular. For object detection you apply tags to objects within an image; for data labeling you hover your cursor over the image and Custom Vision uses ML to show bounding boxes of possible objects that have not yet been labeled, and if it doesn't detect something you can just click and drag to draw out whatever box you want, so here's one where I tagged it up quite a bit. You have to have at least 50 images on every tag to train, so just be aware of that when you're tagging your images. When you're ready to train your model you have two options: quick training, which trains quickly but will be less accurate, and advanced training, which increases compute time to improve your results; for advanced training you basically just have a slider you move to the right. With each iteration of training our ML model should improve on the evaluation metrics, so precision and recall are going to vary (we'll talk about the metrics in a moment), and the probability threshold value sets the minimum confidence a prediction needs in order to count when calculating precision and recall, so these are just additional options you can adjust while training. Then when we get our results back we get some evaluation metrics: precision, being how exact and accurate it is, so of the items it selected, how many were relevant; recall, also called sensitivity or the true positive rate, so how many of the relevant items were returned; and average precision. It's important that you remember these because they might ask you about them on the exam. When we're looking at object detection, the evaluation metric outcomes are precision, recall and mean average precision; there's a tiny worked example of precision and recall just below.
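Since the exam likes these definitions, here's a small, self-contained sketch of how precision and recall fall out of true/false positives and false negatives. The counts are made up purely for illustration.

```python
# Toy illustration of the evaluation metrics Custom Vision reports.
# The counts below are made up; they just show how the formulas work.
true_positives = 8    # tagged "Worf" and really is Worf
false_positives = 2   # tagged "Worf" but actually someone else
false_negatives = 1   # really Worf but the model missed it

precision = true_positives / (true_positives + false_positives)  # how exact the predictions are
recall = true_positives / (true_positives + false_negatives)     # how many relevant items were found

print(f"precision = {precision:.2f}")  # 0.80 -> of everything it tagged, 80% was right
print(f"recall    = {recall:.2f}")     # 0.89 -> it found 89% of the real examples
# Average precision (and mean average precision for object detection) summarizes precision
# across recall levels / across all tags; the portal computes those for you.
```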
Once we've deployed our model it makes sense to give it a quick test to make sure it's working correctly, so you press the quick test button, upload your image, and it will tell you what it sees; this one says it's Worf. When you're ready to publish you just hit the publish button, and you'll get a prediction URL and the information you need to invoke it. One other feature that's kind of useful is the smart labeler: once you've loaded some training data it can start making suggestions, so you can't use it right away, but once it has some data it offers predictions that aren't 100% guaranteed and just helps you build up your training dataset a lot faster. It's very useful if you have a very large dataset, and this is known as ML-assisted labeling.

Hey this is Andrew Brown from Exam Pro, and in this section we'll be covering the newly added part of the AI-900 that focuses on generative AI. Generative AI, including technologies like ChatGPT, is becoming more recognized outside of tech circles. While it may seem magical in its ability to produce human-like content, it's actually based on advanced mathematical techniques from statistics, data science and machine learning, and understanding these core concepts can help society envision new AI possibilities for the future. First, let's compare regular AI versus generative AI. AI refers to the development of computer systems that can perform tasks typically requiring human intelligence; these include problem solving, decision making, understanding natural language, recognizing speech and images, and more. The primary goal of traditional AI is to create systems that can interpret, analyze and respond to human actions or environmental changes efficiently and accurately; it aims to replicate or simulate human intelligence in machines. AI applications are vast and include areas like expert systems, natural language processing, speech recognition and robotics, and AI is used across industries for tasks such as customer service chatbots, recommendation systems in e-commerce, autonomous vehicles and medical diagnosis. Generative AI, on the other hand, is a subset of AI that focuses on creating new content or data that is novel and realistic; it does not just interpret or analyze data but generates new data itself, including text, images, music, speech and other forms of media. It often involves advanced machine learning techniques, particularly deep learning models like generative adversarial networks, variational autoencoders, and transformer models like GPT. Generative AI is used in a range of applications, including creating realistic images and videos, generating human-like text, composing music, creating virtual environments and even drug discovery; some examples include tools like GPT for text generation, DALL-E for image creation, and various deep learning models that compose music. So let's quickly summarize the differences between regular AI and generative AI across three features: functionality, data handling and applications. Regular AI focuses on understanding and decision making, whereas generative AI is about creating new, original outputs. In terms of data handling, regular AI analyzes and bases decisions on existing data, while generative AI uses the same data to generate new, previously unseen outputs. And for applications, regular AI's scope includes data analysis, automation, natural language processing and healthcare, whereas generative AI leans toward more creative and innovative applications such as content creation, synthetic data generation, deepfakes and design.
The next topic we'll be covering is: what is a large language model? A large language model such as GPT works in a way that's similar to a complex automatic system that recognizes patterns and makes predictions. Training on large datasets: initially the model is trained on massive amounts of text data, which can include books, articles, websites and other written material, and during this training phase the model learns patterns in language such as grammar, word usage, sentence structure, and even style and tone. Understanding context: the model's design allows it to consider a wide context, meaning it doesn't just focus on single words but understands them in relation to the words and sentences that come before and after, and this context understanding is important for generating coherent and relevant text. Predicting the next word: when you give the model a prompt, which is a starting piece of text, it uses what it has learned to predict the next most likely word; it then adds this word to the prompt and repeats the process, continually predicting the next word based on the extended sequence. Generating text: this process of predicting the next word continues, creating a chain of words that forms a coherent piece of text, and the length of the generated text can vary based on specific instructions or limitations set for the model. Refinement with feedback: the model can be further refined and improved over time with feedback, meaning it gets better at understanding and generating text as it is exposed to more data and usage. In summary, a large language model works by learning from a vast quantity of text data, understanding the context of language, and using this understanding to predict and generate new text that is coherent and contextually appropriate, which can be further refined with feedback, as shown in the workflow image.

Next, let's talk about transformer models. A transformer model is a type of machine learning model that's especially good at understanding and generating language; it's built using a structure called the transformer architecture, which is really effective for tasks involving natural language processing, like translating languages or writing text. The transformer model architecture consists of two components or blocks. First we have the encoder: this part reads and understands the input text; it's like a smart system that goes through everything it's been taught, which is a lot of text, and picks up on the meanings of words and how they're used in different contexts. Then we have the decoder: based on what the encoder has learned, this part generates new pieces of text; it's like a skilled writer that can make up sentences that flow well and make sense. There are different types of transformer models with specific jobs. For example, BERT is good at understanding language; it's like a librarian who knows where every book is and what's inside them, and Google uses it to help its search engine understand what you're looking for. GPT is good at creating text; it's like a skilled author who can write stories, articles or conversations based on what it has learned. So that's an overview of a transformer model; next we'll be talking about the main components of a transformer model. The first component we'll be covering is the tokenization process. Tokenization in a transformer model is like turning a sentence into a puzzle. For example, take the sentence "I heard a dog bark loudly at a cat".
To help a computer understand it, we chop the sentence up into pieces called tokens, where each piece can be a word or even part of a word. For our sentence we give each word a number, like this: "I" might be 1, "heard" might be 2, "a" might be 3, "dog" might be 4, "bark" might be 5, "loudly" might be 6, "at" might be 7, the second "a" is already token 3, and "cat" might be 8. Now our sentence becomes a series of numbers; this is like giving each word a special code, and the computer uses these codes to learn about the words and how they fit together. If a word repeats, like "a", we reuse its code instead of making a new one. As the computer reads more text it keeps turning new words into new tokens with new numbers, so if it learns the word "meow" it might call it 9, and "skateboard" could be 10. By doing this with lots and lots of text, the computer builds a big list of these tokens, which it then uses to understand and generate language; it's a bit like creating a dictionary where every word has a unique number.

The next component of a transformer model we'll be covering is embeddings. To help a computer understand language we turn words into tokens and then give each token a special numeric code called an embedding; these embeddings are like a secret code that captures the meaning of the word. As a simple example, suppose the embeddings for our tokens are vectors with three elements: token 4 for "dog" has the embedding vector [10, 3, 2], token 5 for "bark" has [10, 2, 2], token 8 for "cat" has [10, 3, 1], token 9 for "meow" has [10, 2, 1], and token 10 for "skateboard" has [3, 3, 1], which is quite different from the rest. Words that have similar meanings, or that are used in similar ways, get codes that look alike, so "dog" and "bark" have similar codes because they are related, while "skateboard" sits off in a different area because it isn't much related to the other words. This way the computer can figure out which words are similar to each other just by looking at their codes; it's like giving each word a home on a map, where words that are neighbors on the map have related meanings. The image shows a simple example model in which each embedding has only three dimensions; real language models have many more dimensions. Tools such as Word2Vec, or the encoder part of a transformer model, help the AI figure out where each word's dot should go on this big map; there's a small similarity sketch just below.
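To make that "neighbors on a map" idea concrete, here's a tiny sketch that takes the toy three-element embeddings from the example above and measures how close they are with cosine similarity, which is a standard way of scoring vector similarity (numpy is only used for the arithmetic).

```python
# Toy embeddings from the lecture example: 3 dimensions each.
import numpy as np

embeddings = {
    "dog":        np.array([10, 3, 2]),
    "bark":       np.array([10, 2, 2]),
    "cat":        np.array([10, 3, 1]),
    "meow":       np.array([10, 2, 1]),
    "skateboard": np.array([3, 3, 1]),
}

def cosine_similarity(a, b):
    """1.0 means pointing the same way (very similar); lower values mean less related."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["dog"], embeddings["bark"]))        # high: related words
print(cosine_similarity(embeddings["dog"], embeddings["skateboard"]))  # noticeably lower
```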
Let's go over positional encoding in a transformer model. Positional encoding is a technique used to ensure that a language model such as GPT doesn't lose the order of words when processing natural language; this is important because the order in which words appear can change the meaning of a sentence. Take the sentence "I heard a dog bark loudly at a cat" from our previous example: without positional encoding, if we simply tokenize this sentence and convert the tokens into embedding vectors, we might end up with a set of vectors that loses the sequence information. Positional encoding adds a positional vector to each word in order to keep track of the positions of the words, so by adding positional encoding vectors to each word's embedding we ensure that each position in the sentence is uniquely identified. The embedding for "I" would be modified by adding a positional vector corresponding to position one, labeled I-1; the embedding for "heard" would be altered by a vector for position two, labeled heard-2; the embedding for "a" would be updated with a vector for position three, labeled a-3, and the second occurrence of "a" reuses the same token but is handled the same way for its own position. This process continues for each word token in the sentence, with dog-4, bark-5, loudly-6, at-7 and cat-8 all receiving their unique positional encodings. As a result, the sentence "I heard a dog bark loudly at a cat" is represented not just by a sequence of vectors for its words, but by a sequence of vectors that are influenced by the position of each word in the sentence. This means that even if another sentence had the same words in a different order, its overall representation would be different, because the positional encodings differ, reflecting the different sequence of words. So that's an overview of positional encoding.

The next component of a transformer we'll be covering is attention. Attention in AI, especially in transformer models, is how the model figures out how important each word or token is to the meaning of a sentence, particularly in relation to the other words around it. Let's reuse the sentence "I heard a dog bark loudly at a cat" to explain this better. Self-attention: imagine each word in the sentence shining a flashlight on the other words; the brightness of the light shows how much one word should pay attention to the others when understanding the sentence, so for "bark" the light might shine brightest on "dog" because they're closely related. The encoder's role: in the encoder part of a transformer model, attention helps decide how to represent each word as a number or vector; it's not just the word itself but also its context that matters, so "bark" in "the bark of a tree" would have a different representation than "bark" in "I heard a dog bark", because the surrounding words are different. The decoder's role: when generating new text, like completing a sentence, the decoder uses attention to figure out which of the words it already has are most important for deciding what comes next; if our sentence so far is "I heard a dog", the model uses attention to know that "heard" and "dog" are key to adding the next word, which might be "bark". Multi-head attention: this is like having multiple flashlights, each highlighting different aspects of the words; maybe one flashlight looks at the meaning of the word, another looks at its role in the sentence, like subject or object, and so on, and this helps the model get a richer understanding of the text. Building the output: the decoder builds the sentence one word at a time, using attention at each step; it looks at the sentence so far, decides what's important, and then predicts the next word, in an ongoing process where each new word influences the next. So attention in transformer models is like a guide that helps the AI understand and create language by focusing on the most relevant parts of the text, considering both individual word meanings and their relationships within the sentence. Let's take a look at the attention process. Token embeddings: each word in the sentence is represented as a vector of numbers, its embedding. Predicting the next token: the goal is to figure out what the next word should be, also represented as a vector. Assigning weights: the attention layer looks at the sentence so far and decides how much influence each word should have on the next one. Calculating attention scores: using these weights, a new vector for the next token is calculated, which includes an attention score; multi-head attention does this several times, focusing on different aspects of the words. Choosing the most likely word: a neural network takes these vectors with attention scores and picks the word from the vocabulary that most likely comes next. Adding to the sequence: the chosen word is added to the existing sequence, and the process repeats for each new word; a tiny numeric sketch of these steps follows.
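Here's a very small numeric sketch of the "assigning weights / calculating attention scores" steps above: a bare-bones scaled dot-product attention over made-up vectors. This isn't exam material, and it leaves out everything a real transformer adds (learned projections, multiple heads, positional encodings); it's only meant to show the weighted-mix idea.

```python
# Minimal scaled dot-product attention over toy vectors (illustration only).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Pretend these are the embeddings for "I heard a dog" (4 tokens, 3 dimensions).
tokens = np.array([
    [0.1, 0.0, 0.2],   # I
    [0.9, 0.3, 0.1],   # heard
    [0.0, 0.1, 0.0],   # a
    [1.0, 0.3, 0.2],   # dog
])

query = tokens[-1]                                  # focus on the latest token ("dog")
scores = tokens @ query / np.sqrt(tokens.shape[1])  # how relevant each token is to it
weights = softmax(scores)                           # attention weights, sum to 1
context = weights @ tokens                          # weighted mix used to predict the next token

print(np.round(weights, 2))   # "heard" and "dog" get the biggest weights
print(np.round(context, 2))
```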
So let's use GPT-4 as an example of how this entire process works, explained in a simplified manner. A transformer model like GPT-4 works by taking a text input and producing a well-structured output. During training it learns from a vast array of text data, understanding how words are typically arranged in sentences; the model knows the correct sequence of words but hides future words to learn how to predict them, and when it tries to predict a word it compares its guess to the actual word, gradually adjusting to reduce errors. In practice the model uses its training to assign importance to each word in a sequence, helping it guess the next word accurately. The result is that GPT-4 can create sentences that sound like they were written by a human. However, this doesn't mean the model knows things or is intelligent in the human sense; it's simply very good at using its large vocabulary and training to generate realistic text based on word relationships. So that's an overview of attention in a transformer model.

Hey this is Andrew Brown from Exam Pro, and in this section we'll be going over an introduction to Azure OpenAI Service. Azure OpenAI Service is a cloud-based platform designed to deploy and manage advanced language models from OpenAI; this service combines OpenAI's latest language model development with the robust security and scalability of Azure's cloud infrastructure. Azure OpenAI offers several types of models for different purposes. GPT-4 models: these are the newest in the line of GPT models and can create text and programming code when given a prompt written in natural language. GPT-3.5 models: similar to GPT-4, these models also create text and code from natural language prompts, and the GPT-3.5 Turbo version is specially designed for conversations, making it a great choice for chat applications and other interactive AI tasks. Embedding models: these models turn written text into number sequences, which is helpful for analyzing and comparing different pieces of text to find out how similar they are. DALL-E models: these models can make images from descriptions given in words; the DALL-E models are still being tested and are shown in the Azure OpenAI Studio, so you don't have to set them up for use manually. Key concepts in using Azure OpenAI include prompts and completions, tokens, resources, deployments, prompt engineering and the various models. Prompts and completions: users interact with the API by providing a text command in natural language, known as a prompt, and the model generates a text response, or completion; for example, a prompt to count to five in a loop results in the model returning appropriate code. Tokens: Azure OpenAI breaks text down into tokens, which are words or chunks of characters, to process requests; the number of tokens affects response latency and throughput, and for images the token cost varies with image size and the detail setting, with low-detail images costing fewer tokens and high-detail images costing more. Resources: Azure OpenAI operates like other Azure products, where users create a resource within their Azure subscription. Deployments: to use the service, users must deploy a model via the deployment APIs, choosing the specific model for their needs. Prompt engineering: crafting prompts is crucial, as they guide the model's output; this requires skill because prompt construction is nuanced and impacts the model's response. Models: the various models offer different capabilities and pricing; DALL-E creates images from text, while Whisper transcribes and translates speech to text, and each has unique features suitable for different tasks; a rough example call follows.
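To tie the prompt/completion, token and deployment ideas together, here's a rough sketch of what a call to a deployed chat model looks like with the openai Python package's Azure client. The endpoint, API version, key and deployment name are all placeholders you'd take from your own resource, so treat this as illustrative rather than something the course walks through.

```python
# Illustrative Azure OpenAI chat completion call (placeholders throughout).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-02-01",          # assumption: any current API version works here
)

response = client.chat.completions.create(
    model="<your-gpt-35-turbo-deployment>",  # the *deployment* name, not the base model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a loop that counts to five."},
    ],
)

print(response.choices[0].message.content)   # the completion
print(response.usage.total_tokens)           # tokens used, which is what billing is based on
```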
So that's an overview of Azure OpenAI Service. The next topic we'll be covering is Azure OpenAI Studio. Developers can work with these models in Azure OpenAI Studio, a web-based environment where AI professionals can deploy, test and manage LLMs that support generative AI app development on Azure. Access is currently limited due to high demand, upcoming product improvements and Microsoft's commitment to responsible AI; presently, applications are being prioritized for those who already have a partnership with Microsoft, are engaged in lower-risk use cases, and are dedicated to including the necessary safeguards. In Azure OpenAI Studio you can deploy large language models, provide few-shot examples, and test them in the chat playground. The image shows the chat playground interface, where users can test and configure an AI chatbot: in the middle there's a chat area to type user messages and see the assistant's replies; on the left there's a menu for navigation and a section to set up the assistant, including a reminder to save changes; and on the right, adjustable parameters control the AI's response behavior, like length, randomness and repetition. Users enter queries, adjust settings, and observe how the AI responds in order to fine-tune its performance. So that's an overview of Azure OpenAI Studio.

Let's take a look at the pricing for the models in Azure OpenAI Service, starting with the language models. GPT-3.5 Turbo with a context of 4K tokens costs $0.0015 for prompts and $0.002 for completions, per 1,000 tokens. Another version of GPT-3.5 Turbo can handle a larger context of 16K tokens, with prompt and completion costs increasing to $0.003 and $0.004 respectively. GPT-3.5 Turbo 1106 with a 16K context has no available pricing listed. GPT-4 Turbo and GPT-4 Turbo with Vision both have an even larger context size of 128K tokens but also have no listed prices. The standard GPT-4 model with an 8K token context costs 3 cents for prompts and 6 cents for completions, and the larger 32K context version of GPT-4 costs 6 cents for prompts and 12 cents for completions. There are other models, such as the base models, fine-tuning models, image models, embedding models and speech models; they all have their respective pricing, but we won't go through each of them in detail. Essentially they are all on a pay-per-use pricing model, whether that's pay per hour or pay per token, and the higher the quality of the model, the more expensive it will likely be. So that's an overview of Azure OpenAI Service pricing.

Hey this is Andrew Brown from Exam Pro, and the next topic we'll be going over is co-pilots. Co-pilots are a new type of computing tool that integrates with applications to help users with common tasks using generative AI models. They are designed using a standard architecture, allowing developers to create custom co-pilots tailored to specific business needs and applications. Co-pilots might appear as a chat feature beside your document or file, and they utilize the content within the product to generate specific results. Creating a co-pilot involves several steps: training a large language model with a vast amount of data; utilizing services like Azure OpenAI Service, which provide pre-trained models that developers can either use as-is or fine-tune with their own data for more specific tasks; deploying the model to make it available for use within applications; and building co-pilots that prompt the models to generate usable content, enabling business users to enhance their productivity and creativity through AI-generated assistance.
Co-pilots have the potential to revolutionize the way we work; they use generative AI to help with first drafts, information synthesis, strategic planning and much more. Let's take a look at a few examples of co-pilots, starting with Microsoft Copilot. Microsoft Copilot is integrated into various applications to assist users in creating documents, spreadsheets, presentations and more, by generating content, summarizing information and aiding in strategic planning; it is used across Microsoft's suite of products and services to enhance user experience and efficiency. Next we have the Microsoft Bing search engine, which has an integrated co-pilot to help users when browsing or searching the internet by generating natural language answers to questions; by understanding the context of the questions it provides a richer and more intuitive search experience. Microsoft 365 Copilot is designed to be a partner in your workflow; integrated with productivity and communication tools like PowerPoint and Outlook, it's there to help you craft effective documents, design spreadsheets, put together presentations, manage emails and streamline other tasks. GitHub Copilot is a tool that helps software developers, offering real-time assistance as they write code; it offers more than suggesting code snippets, as it can help in thoroughly documenting code for better understanding and maintenance, and it also contributes to the development process by providing support for testing code, ensuring that developers can work more efficiently and with fewer errors. So that's an overview of co-pilots.

Hey this is Andrew Brown from Exam Pro, and the next topic we'll be covering is prompt engineering. Prompt engineering is a process that improves the interaction between humans and generative AI; it involves refining the prompts, or instructions, given to an AI application to generate higher quality responses. This process is valuable both for the developers who create AI-driven applications and for the end users who interact with them. For example, developers may build a generative AI application for teachers to create multiple-choice questions related to text students read; during the development of the application, developers can add rules for what the program should do with the prompts it receives. Prompt engineering techniques include defining a system message: the message sets the context for the model by describing expectations and constraints, for example "you are a helpful assistant that responds in a cheerful, friendly manner", and these system messages determine constraints and styles for the model's responses. Writing good prompts: to maximize the utility of AI responses it is essential to be precise and explicit in your prompts; a well-structured prompt such as "create a list of 10 things to do in Edinburgh during August" directs the AI to produce a targeted and relevant output, achieving better results. Zero-shot learning refers to an AI model's ability to correctly perform a task without any prior examples or training on that specific task, while one-shot learning involves the AI model learning from a single example or instance to perform a task, as in the sketch below.
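Here's a small sketch of those ideas expressed as a chat request: a system message that sets the style, plus a single worked example (one-shot) before the real question. It reuses the hypothetical Azure OpenAI client from the earlier sketch, and the deployment name is a placeholder.

```python
# Prompt engineering sketch: system message + one-shot example (placeholders throughout).
messages = [
    # System message: sets expectations, constraints and tone for every reply.
    {"role": "system",
     "content": "You are a helpful study assistant for the AI-900 exam. "
                "Answer in at most three short bullet points."},

    # One-shot example: a single demonstration of the format we want.
    {"role": "user", "content": "Define precision."},
    {"role": "assistant",
     "content": "- Of the items the model flagged, the share that were actually correct."},

    # The real question, phrased precisely (good prompt writing).
    {"role": "user", "content": "Define recall for an image classification model."},
]

# With the client from the earlier Azure OpenAI sketch:
# response = client.chat.completions.create(model="<deployment>", messages=messages)
# print(response.choices[0].message.content)
```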
Here is an example of prompt engineering with a user query and system response. The user inputs: "Can my camera handle the rainy season if I go to the Amazon rainforest next week?" Some of the prompt engineering components could be a weather-resistance feature check, the user's equipment database, rainforest climate data, product specifications, travel tips for photographers, and so on. For the LLM processing, the AI system integrates the user's question with data about the Amazon's climate, specifically during the rainy season, and the product information about the camera's weather-resistance features; it also references a database of the user's equipment to make sure it's talking about the correct item, and it may include travel tips that are useful for photographers heading to similar climates. The output then results in something like: "Your current camera model is designed with a weather-sealed body suitable for high humidity and rain conditions, which matches the expected weather in the Amazon rainforest next week; however, for added protection during heavy rain, consider using a rain cover." Next, let's take a look at the prompt engineering workflow. This image describes a simplified step-by-step process for working with AI models and prompt engineering: 1. task understanding, know what you want the AI to do; 2. craft prompts, write instructions for the AI; 3. prompt alignment, make sure the instructions match what the AI can do; 4. optimizing prompts, improve the instructions for better AI responses; 5. AI model processing, the AI works through the instructions; 6. generating output, the AI gives an answer or result; 7. output refinement, fix or tweak the AI's answer; 8. iterative improvement, keep improving the instructions and answers. So that's an overview of prompt engineering.

The next topic we'll be covering is grounding. Grounding in prompt engineering is a technique used with large language models where you provide specific, relevant context within a prompt. This helps the AI produce a more accurate and related response; for example, if you want an LLM to summarize an email, you would include the actual email text in the prompt along with the command to summarize it. This approach lets you leverage the LLM for tasks it wasn't explicitly trained on, without the need for retraining the model. So what's the difference between prompt engineering and grounding? Prompt engineering broadly refers to the art of crafting effective prompts to produce the desired output from an AI model, while grounding specifically involves enriching prompts with relevant context to improve the model's understanding and responses. Grounding ensures the AI has enough information to process the prompt correctly, whereas prompt engineering can also include techniques like format, style and the strategic use of examples or questions to guide the AI. The image outlines a framework for grounding options in prompt engineering within the context of large language models. Grounding options: these are techniques to ensure LLM outputs are accurate and adhere to responsible AI principles. Prompt engineering is placed at the top, indicating its broad applicability; this involves designing prompts to direct the AI toward generating the desired output. Fine-tuning sits a step below in complexity, where LLMs are trained on specific data to improve their task performance. Training is the most resource-intensive process, at the base of the triangle, suggesting it's used for more extensive customization needs. LLMOps and responsible AI are the foundational aspects, emphasizing the importance of operational efficiency and ethical standards across all stages of LLM application development. So that's an overview of grounding, and there's a small grounding sketch just below.
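A minimal sketch of grounding, using the email-summary example above: the relevant context (the email itself) is pasted straight into the prompt so the model has what it needs. The email text and the commented-out client call are made up, carried over from the earlier hypothetical Azure OpenAI sketch.

```python
# Grounding sketch: put the relevant context (the email) directly into the prompt.
email_text = """Hi team, the vendor moved our onboarding call to Thursday at 2pm.
Please review the attached security checklist before then and flag any blockers."""

grounded_messages = [
    {"role": "system", "content": "You summarize emails in one sentence."},
    # The grounding: the actual email is included as context, not just a vague request.
    {"role": "user", "content": f"Summarize this email:\n\n{email_text}"},
]

# With the hypothetical client from the earlier sketch:
# summary = client.chat.completions.create(model="<deployment>", messages=grounded_messages)
# print(summary.choices[0].message.content)
```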
Hey this is Andrew Brown from Exam Pro, and in this demo we'll go over a short look at what you can do with Copilot with GPT-4 on Microsoft Bing. To get here you'll need to search for something like "Copilot Bing" and click on "Try Copilot", and you should be able to access this page. On here you have some suggested or popular prompts that people commonly use, such as "create an image of a concept kitchen", "generate ideas for new products", "how would you explain AI to a sixth grader", "write Python code to calculate all the different flavor combinations for my ice cream parlor", and so on. You can choose the conversation style, ranging from More Creative, for more original and imaginative ideas, through More Balanced, to More Precise, for more factual information; we'll go with somewhere in the middle, More Balanced, just for this example. On the bottom you can type in any prompt you want, so for example we can type something simple like "summarize the main differences between supervised and unsupervised learning for the AI-900 exam", and you'll see that it starts generating an answer. For supervised learning, data labeling: in supervised learning the training data is pre-labeled with the correct output values, and it provides other points and examples as well; for unsupervised learning, no labels: unsupervised learning operates without labeled data and seeks to discover patterns, structures or relationships within the raw data. Notice how it uses sources from the internet, and if you want to learn more you can click on the links it provides to go directly to the source of the information, which is very convenient; let's quickly check one out, and it seems like the information we got was pretty good and credible. On the bottom it also provides some suggested follow-up questions related to the previous prompt. Another cool feature of Copilot is that it's integrated with DALL-E 3, which is an image generation service, so for example you can say something like "create an image of a cute dog running through a green field on a sunny day"; you'll have to wait a little bit for it to generate the image you described in your prompt, and there we go, we have an adorable little puppy running through the fields. You also have the power to modify images if you're not satisfied with the result, and they provide a few options here: for example, we can add a rainbow in the background, change it into a cat, or make the sky pink and purple. Let's try changing it to a cat; it regenerates, and there we go, it's now a cute little cat running through the field. You can also write code using Copilot, so for example I can type in "write a Python function to check if a given number is prime" and it will start generating a piece of code for me. It can write code in multiple languages, not just Python, so let's try something with JavaScript: "create a JavaScript function to reverse a string", and of course we'll need to wait for the code to generate; there we go, here's our code for the function to reverse a string, just as we asked for. So that's a really quick and general demo of Copilot with GPT-4, and below is roughly the kind of function the prime-number prompt produces.
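For reference, a function along these lines is roughly what the prime-number prompt comes back with; the exact code Copilot generates will vary from run to run, so this is just a representative version.

```python
# Representative answer to "write a Python function to check if a given number is prime".
def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2           # 2 is the only even prime
    divisor = 3
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 2            # only odd divisors need checking
    return True

print([x for x in range(20) if is_prime(x)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```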
Hey this is Andrew Brown from Exam Pro, and in this follow-along we're going to set up a studio with the Azure Machine Learning service, which will be the basis for all the follow-alongs here. Go all the way to the top of the portal and type in Azure Machine Learning; you're looking for the one whose icon looks like a science flask, and we'll go ahead and create ourselves our machine learning studio. I'll create a new resource group and just call it my-studio, hit OK, and then we'll name the workspace, so we'll say ml-workspace; for the containers and other dependencies there are none yet, so it will create all of that for us. I'll hit create, and we just wait for that creation to finish. All right, after a short wait our studio is set up, so we'll go to that resource, launch the studio, and we are now in. There's a lot of stuff in here, but generally the first thing you'll want to do is get a notebook going, so in the top left corner I'm going to go to Notebooks, and we'll need to load some files in here. They do have some sample files, like how to use Azure ML, so if we quickly go through here, maybe we'll look at something like the MNIST example, open it and clone it over. The idea is that we want to get this notebook running, and notebooks have to be backed by some kind of compute; up here it says no compute found. So I'm going to go down to Compute, and here we have our four types of compute: compute instances are for when we're running notebooks, compute clusters are for when we're doing training, inference clusters are for when we have an inference pipeline, and attached compute is for bringing in things like HDInsight or Databricks. Compute instances are what we need, so we'll hit New. You'll notice we have the option between CPU and GPU; GPU is much more expensive, something like 90 cents per hour for a notebook, and we do not need anything super powerful. Notice it says here this size is for development on notebooks and IDEs, lightweight testing, classical ML model training, AutoML, pipelines and so on. I want to make this a bit cheaper for us, because we're going to be using the notebook to run Cognitive Services and those cost next to nothing in compute power; for some of the other follow-alongs we might pick something a bit larger, but this is good enough. I'll hit next, name it my-notebook-instance, hit create, and we just have to wait for that to finish creating and starting. All right, after a short wait our server is running, and you can even see it lets you launch JupyterLab, Jupyter, VS Code, RStudio or the terminal. What I'm going to do is go back to Notebooks just so we have some consistency, and notice it's now running on this compute; if it's not, you can go ahead and select it. It also loaded in a Python 3.6 kernel; there is a 3.8 one, and it's not a big deal which one you use, but that kernel is how it will run this stuff. Now, this is all interesting, but I don't want to run this sample right now; what I want to do is get those Cognitive Services notebooks in here. So we'll go up here, choose Editors, then Edit in Jupyter Lab, and that should open up a new tab; if it doesn't open, you can go to Compute and launch Jupyter Lab from there, which is the same way of getting to it, and for whatever reason that link in the notebook just sometimes doesn't work.
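The follow-along does all of this through the portal, but for completeness here's roughly what the same workspace setup looks like with the older azureml-core (v1) Python SDK. The subscription ID, resource group and names are placeholders, and this is just a sketch rather than something we run in the labs.

```python
# Rough SDK equivalent of the portal steps above, using azureml-core (v1).
# All identifiers below are placeholders.
from azureml.core import Workspace

ws = Workspace.create(
    name="ml-workspace",
    subscription_id="<subscription-id>",
    resource_group="my-studio",
    create_resource_group=True,
    location="westus",            # pick the region you actually use
)

ws.write_config()                  # saves config.json so notebooks can reconnect later
print(ws.name, ws.location)

# Later, from a notebook running on a compute instance:
# ws = Workspace.from_config()
```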
Now that we're in Jupyter Lab, we can see this is where the example project lives, but what we want to do is get those Cognitive Services notebooks in here. I have a repository for this; it's a public repo for the free AI-900 course (I'm going to rename it so the name isn't confusing). There are a couple of ways we can pull it in: we could use the terminal to grab it, but I'm just going to download the zip, which is one of the easiest ways, and drag it out of my downloads folder. Then we'll upload it here; it only lets you upload individual files rather than whole folders, which is not that big of a deal, so we'll make a new folder called cognitive-services and keep uploading the pieces. We have an assets folder with a couple of loose files, a crew folder, an OCR folder, and I believe one called movie-reviews, so we'll go into OCR and upload the files we have there, go back a directory, upload the static movie review files, and then we have an objects folder. Under crew we need a folder called Worf, a folder called Crusher and a folder called Data, and for each of these we have some images, so we'll quickly upload all of those. Technically we don't really need to upload these particular images, because we'll upload them directly to the Custom Vision service later, but since I'm already doing it anyway I'm going to put them here. And so now we are all set up to do some Cognitive Services; I'll see you in the next video.

All right, so now that we have our work environment set up, what we can do is go ahead and get Cognitive Services hooked up, because we need that service in order to interact with it: if we open up any of these notebooks, you'll notice we have a cognitive key and endpoint that we're going to need. So go back to your Azure portal and at the top type in Cognitive Services. The thing is that all of these services are individualized, but at some point Microsoft grouped them together so that you're able to use them through a unified key and API endpoint, and that's the way we're going to do it. We'll hit add, which brings us to the marketplace, type in Cognitive Services, click the multi-service one, and hit create. We'll make a new resource here; I'm just going to call it my-cog-services. I'd prefer to be in US East, but I'll leave it in US West, it's fine, and if it doesn't like that name I'll just put some numbers on the end.
There we go; we'll choose the standard pricing tier, so we will be charged something for it. If we take a look at the pricing, it's quite variable, but you'd have to do around a thousand transactions before you're billed anything, so I think we're going to be okay on billing. We'll check the box down below; it's telling us about responsible AI, and notice that some services will actually make you check a box for it, but in this case it just tells us. We'll hit create, and this doesn't take very long; it's all deployed, so we'll go to the resource, and what we're looking for are our keys and endpoint. We get two keys and two endpoints, but we only need a single key, so I'm going to copy the endpoint and the key over to Jupyter Lab and paste them into each notebook that needs them. Of course I will delete my key before you ever see it; this is something you don't want to share publicly, and usually you don't want to embed keys directly into a notebook, but it's the simplest way to do it here, so that's just how it is with Azure. So all our keys are in place. Going back to the Cognitive Services resource, nothing super exciting here, but it does tell us which services work with it; you'll see there's an asterisk beside Custom Vision because we're going to access that through another app. Cognitive Services is all set up, and that means we are ready to start doing some of these labs.

All right, let's take a look at Computer Vision first. Computer Vision is actually used for a variety of different services, as you will see; it's kind of an umbrella for a lot of different things, but the operation in particular that we're looking at here is Describe Image In Stream. If we go over to the documentation: this operation generates a description of an image in human-readable language with complete sentences, and the description is based on a collection of content tags, which are also returned by the operation. So let's go see what that looks like in action. The first thing is that we need to install the azure-cognitiveservices-vision-computervision package; we do have a kernel, but these packages aren't installed by default, since they're not part of the Azure Machine Learning SDK for Python, which I believe is pre-installed; the AI service packages are not. So we'll run that cell, and you'll notice where it says pip install, that's how it knows to install it. Once that's done, we'll run our imports: we have os, which is usually for handling operating-system level stuff; matplotlib, which is for plotting and which we'll use to show images and draw borders; a library for opening the image files; I'm not sure we're actually using numpy here, but I have it loaded anyway; and then we have the Azure Cognitive Services Computer Vision client and the credentials class, CognitiveServicesCredentials, which is the generic credential type commonly used for most of these services (there are some exceptions where the APIs don't support it yet, but I imagine they will in the future). Just notice that when a cell has run it shows a number beside it, and an asterisk means it hasn't finished running yet; before we run them, the whole flow is sketched just below.
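Pulling those cells together, the flow in the notebook amounts to roughly the following. The endpoint, key and image path are placeholders, and the lab file differs a little, so take this as a sketch of the describe_image_in_stream call rather than a copy of the notebook.

```python
# Sketch of the Computer Vision "describe image in stream" flow from the notebook.
# Endpoint, key and file path are placeholders.
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

cognitive_endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
cognitive_key = "<your-key>"

client = ComputerVisionClient(cognitive_endpoint,
                              CognitiveServicesCredentials(cognitive_key))

# The API takes an image stream, so we open the file in binary mode.
with open("assets/data.jpg", "rb") as image_stream:
    description = client.describe_image_in_stream(image_stream)

print(description.tags)                      # e.g. ['person', 'wall', 'indoor', ...]
for caption in description.captions:
    # Each caption comes back with a confidence score between 0 and 1.
    print(f"{caption.text} ({caption.confidence:.2%})")
```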
So I'll go ahead and hit play up here; it shows an asterisk, then we get a number, and we'll hit play again so those imports are loaded in. Next we package our credentials together, so we pass our key in, and then we load in the client, passing our endpoint and our key. Now we just want to load our image, so here we're loading assets/data.jpg; let's just make sure it's there under assets, and there it is. We load it as a stream, because you have to pass a stream along, so we hit play, and now we'll go ahead and make the call. Great, we're getting some data back: notice we have some properties like person, wall, indoor, man, pointing, and captions; it's not showing all the information, and sometimes you have to extract it out. Then there's a line to enable matplotlib inline (I don't think we have to run it here, but I have it in anyway), and what the next cell does is show us the image and grab whatever captions were returned, so we iterate through the captions and it gives us a confidence score for each one. Let's see what it comes out with: here it says "Brent Spiner looking at a camera", and that is the actor who plays Data on Star Trek, with a confidence score of 57.45%, even though it's 100% correct. The service probably doesn't know contextual pop-culture things like Star Trek character names, but it can identify celebrities because they're in its database. So that is the first introduction to Computer Vision; the key things to remember are that we used describe_image_in_stream, and that we get back a confidence score and this contextual information. Okay, so that's the first one; we'll move on to Custom Vision next.

All right, let's take a look at Custom Vision, so we can do some classification and object detection. The thing is, it is possible to launch Custom Vision through the marketplace: if you type Custom Vision into the portal search it never shows up, but if you go to the marketplace and type in Custom Vision, you can create it that way. We're not going to do it that way, though; the way I like to do it, which I think is a lot easier, is to go up to the address bar and just go to customvision.ai.
You'll come to this website, go ahead and sign in, and it will connect to your Azure account. Once you're in you can create a new project; the first one I'm going to call Star Trek Crew, and we're going to use it to identify different Star Trek crew members. We go down here, and since we haven't yet created a resource, we'll create a new one called my-custom-vision-resource, drop this down, put it in our cog-services resource group, and stick with US West as much as we can. For the pricing tier we have F0 and S0; I think F0 is the free tier, but it's not available for me, so we'll go down below and choose S0, the standard tier. Then we have a bunch of options here: we choose between classification and object detection. Classification is when you have an image and you just want to say what the image is, and there are two modes: multilabel, where you apply multiple labels, say there were two people in the photo, or there was a dog and a cat (I think that's the example they use), or multiclass, where you just have a single class, so what is the one thing in this photo; it can only be one of the particular categories, and that's the one we're going to do, multiclass. We have a bunch of different domains here, and if you want you can go ahead and read about all the different domains and their best use cases, but we're going to stick with General [A2], which is optimized to be faster, and that's really good for our demo; so we'll choose General [A2] and I'll go ahead and create this project.

So now what we need to do is start labeling our content. I want to go ahead and create the tags ahead of time, so we'll have Worf, Data and Crusher, and now we'll upload those images. You know how we uploaded them into the Jupyter notebook? That was totally unnecessary, because we're going to do it all through here. Here is Data, and we'll apply the Data tag to them all at once, which saves us a lot of time; I love that. Now we'll upload Worf, but I don't want to upload them all: I have one quick-test image that we're going to use to make sure this works correctly, so I'll leave it out, choose the Worf tag, and then we'll go ahead and add Beverly Crusher; there she is. So we have all our images in; I don't know how this one got in here, but it's tagged under Worf, so it works out totally fine. Now, because they're all labeled we have a ground truth, so what I want to do is go ahead and train this model. We press train and we have two options: quick training, or advanced training where we can increase the training time for better accuracy, but honestly we just want quick training, so I'll go ahead and do that and it will start its iterative process. Notice on the left-hand side we have the probability threshold: the minimum probability score for a prediction to be valid when calculating precision and recall, so predictions that don't meet that requirement don't count, and if the model gets above it, training might stop early just because it's good enough. Training doesn't take too long, maybe 5 to 10 minutes, so I'll see you back here in a moment.
All right, after waiting a short little while it looks like our results are in, and we get a 100% match here. These are our evaluation metrics that say whether the model achieved its goal or not: we have precision, recall, and I believe this third one is average precision, and it says it did a really good job, so it should have no problem matching up an image. In the top right corner there's a button called quick test, which gives us the opportunity to quickly test the model, so we'll browse our files locally, go to the Worf folder, and I have this quick-test image here; we'll test it and see if it actually matches up to Worf, and it says 98.7% Worf, which is pretty good. I also have some additional images that I put into the repo to test against, because I thought it would be interesting to try characters who aren't necessarily the tagged ones but are pretty close to them. So we'll go to the crew folder: first we'll try Hugh, and Hugh is a Borg, so he's kind of like an android, and we can see he mostly matches to Data, so that's pretty good. We'll give another one a go: Martok is a Klingon, so he should be matched up to Worf, and it's a very strong match to Worf, pretty good. And then Pulaski, who is a doctor and female, should get matched up to Beverly Crusher, and she does, so this works out pretty darn well; I hadn't even tried these before, so it's pretty exciting. Now, if we wanted to make predictions in bulk, I could have sworn there's an upload option for that once the model has data, but it's probably just the quick test, so I'm a bit confused there. Anyway, now that this is ready, what we can do is go ahead and publish it so that it's publicly accessible; we'll just call it crew-model, drop that down, and hit publish, and once it's published we have a prediction URL, an endpoint we could hit programmatically. I'm not going to do that here (we could use Postman to do it), but my point is that we've basically figured it out for classification. So now that we've done classification, let's go back to the vision page and do object detection.

All right, so we're still in Custom Vision; let's go ahead and try out object detection, which is when you can identify particular items in a scene. This project is going to be called combadge, because we're going to try to detect combadges. We have more domains here, but we're going to stick with General [A1] and go ahead and create this project. So what we need to do is add a bunch of images; I'm going to go ahead and create our tag first, which is going to be called combadge. You could look for multiple different kinds of labels, but then you need a lot more images, so we're going to keep it simple and just have that one. I'll go ahead and add some images, going back a couple of steps into our objects folder, and here I have a bunch of photos: we need at least 15 per tag to train, and counting them up we've got 16, because I threw an additional image in here, the badge-test image. We'll leave that one out and see later if it gets picked up.
We got them all in here, so we'll go ahead and upload those, hit upload files, say done, and now we can begin to label. We click into an image, and if you hover over it, it should start detecting things; if it doesn't, you can click and drag. These are all combadges, so we're not going to tag anything else here. So going through them: does hovering give me the combadge on this one? No, so I'm just clicking and dragging to get it. Do we get this combadge? Yes. Do we get this one? Yep, simple as that; it doesn't always detect it, but in most cases it does. It didn't get that one, so we'll just drag it out, and it's not getting this one either, which is interesting because it's pretty clear; it's interesting what it picks out and what it does not grab. It's probably because the photo doesn't have enough contrast, and this other one has a lot, so I think the higher the contrast, the easier it is for it to detect them. There are a few photos that are packed with badges, and a few where the badges are slightly different, so we're going to leave those out. I think it had actually already suggested this one, but we'll just tag it anyway, and hopefully this will be worth the effort; there we go, I think that was the last one. Okay, great, so we have all of our tagged photos, and we can go ahead and train the model. Same options, quick training or advanced training, and we're going to do quick training here. Notice that the options are slightly different: we have the probability threshold, and then we have the overlap threshold, which is the minimum percentage of overlap between predicted bounding boxes and ground-truth boxes for a prediction to be considered correct. So I'll see you back here when it's done.

All right, after waiting a little while it looks like it's done training. Precision is at 75%: precision tells you, if a tag is predicted by your model, how likely that prediction is to be right, so how often it guessed correctly. Then you have recall: that number tells you, out of the tags that should have been predicted correctly, what percentage your model actually found, and we have 100% there. And then you have mean average precision, which tells you the overall object detector performance across all the tags. So what we'll do is go ahead and do a quick test on this model and see how it does; I can't remember if I've actually run this before, so it'll be curious to see. On the first one the badge isn't as clearly visible, since it's part of the uniform, so I'm not expecting it to pick everything up, but it picks up pretty much all of them, with the exception that this one detection is definitely not a combadge. It only shows suggestions with probabilities above the selected threshold, so if we move that slider up and down it changes which boxes show, and bringing it down a bit kind of improves it; I imagine via the API we could choose that value too. Let's go look at our other sample image. I'm not seeing it; let me just double check that I saved it to the correct directory. Yeah, I saved it to the wrong place, so just a moment; I'll place it and call it badge-test-2, and then I'll just browse here again.
So let's do a quick test on this model and see how it does — I can't remember if I even ran this before, so I'm curious. In the first image the badge isn't clearly visible since it's part of the uniform, so I'm not expecting much, but it picks up pretty much all of them, with the exception of one detection that is definitely not a combadge. The results only show suggestions with probabilities above the selected threshold, so moving the slider back and forth tunes what gets shown, and I imagine via the API we could choose that value too. Let's look at our other sample image — it turns out I saved it to the wrong place, so I'll move it, call it badge test two, and browse again. Here we have another one, and it picks up the badge right there, so it worked. So Custom Vision is pretty easy to use and pretty darn good. We'll close this off and make our way back to Jupyter Lab to move on to our next lab. [Music] All right, let's move on to the Face service: just double-click it on the left-hand side and we'll work our way from the top. The first thing is to make sure the Face SDK is installed; it sits in the same Cognitive Services vision family as the Computer Vision package we've been using. Once that's done we do our imports, very similar to the last lab, but here we're using the FaceClient, still with the CognitiveServicesCredentials. We populate our keys, make the face client and authenticate, and we use the same image of Data that we used with computer vision. Printing the raw results just gives us an object, which isn't very clear, but if we inspect it we can see it has identified a face ID. Going through the code: we open the image, set up a figure for plotting, report how many faces were detected in the photo (here it detected one face), iterate through them, and draw a bounding box around each one — we can do that because the service returns a face_rectangle with top, left and so on. We draw that rectangle in magenta (I could bump the line width up to three), annotate it with the face ID, which is the unique identifier for the face, and show the image. Now, if we want more detailed information, such as attributes like age, emotion, makeup or gender, the first image's resolution wasn't large enough, so I had to find a bigger one — which is worth knowing: if the image isn't large enough it won't process. So we load the larger Data image and follow a very similar process: the same detect_with_stream call, but now passing in return_face_attributes with the list of attributes we want (we went through that list in the lecture content). Running it gives us a lot more information; the magenta outline is still hard to see even at width three, but that's okay. It says approximate age 44 — I think the actor was a bit younger than that — and gender male; Data is male-presenting, though as an android he doesn't necessarily have a gender. He's actually wearing a lot of makeup, but the service only seems to consider the lips and eyes, so it says no makeup; maybe eye shadow would register. For emotion, the scores are tiny across the board and it lands on neutral. Going through the code quickly again: it reports the number of faces (one face detected), draws a bounding box around the face, and for the detected attributes it takes the face attributes returned in the result, turns them into a dictionary and just iterates over the values — and that's as complicated as it gets.
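Condensing those cells into one place, the calls look roughly like the sketch below. The endpoint, key and file name are placeholders, and the available attribute set has changed over time (some attributes now require approval from Microsoft), so treat this purely as an illustration of the pattern the notebook uses.

```python
# Hedged sketch of Face detection with attributes (not the notebook's exact code).
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face import FaceClient

face_client = FaceClient("https://<resource-name>.cognitiveservices.azure.com/",
                         CognitiveServicesCredentials("<face-key>"))

with open("data-large.jpg", "rb") as image:   # the larger image used for attributes
    faces = face_client.face.detect_with_stream(
        image,
        return_face_attributes=["age", "gender", "emotion", "makeup"])

for face in faces:
    rect = face.face_rectangle   # top/left/width/height used to draw the bounding box
    print(face.face_id, rect.left, rect.top, rect.width, rect.height)
    print(face.face_attributes.age, face.face_attributes.emotion.neutral)
```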
[Music] All right, we're on to our next cognitive service: let's take a look at Form Recognizer. Form Recognizer tries to identify forms and turn them into structured, readable data, and it has a prebuilt model for receipts in particular. At the top of the notebook, for once we're not installing the computer vision package — this one is azure-ai-formrecognizer — so we run that. This SDK also isn't consistent with the others: notice the other notebooks use the CognitiveServicesCredentials, but for this one we have to use AzureKeyCredential, which was a bit annoying; I tried to use the other one to be consistent but couldn't. So we load our keys as before and create a client — a very similar process — and this time we actually have a receipt, and we call begin_recognize_receipts so it can analyze the receipt. The notebook then shows us the image so we have a reference to look at. The image isn't actually yellow — it's a white background — but for some reason it renders that way here, and it even obscures the server name; I don't know why, but that's just what happens. Down below are the returned results, and if we print them out we can see we get a recognized form back with fields and some additional data. Going into the fields themselves there's a lot more information — for example MerchantPhoneNumber comes back as a form field with a label and a value, and there's the number 512707. For receipts the model extracts a set of predefined fields; if we look at the API documentation there's a list of them — receipt type, merchant name and so on. Back in the notebook, the MerchantName field comes back as Alamo Drafthouse Cinema. Let's see if we can pull the balance; I never ran this part when I made the notebook, so we'll find out. Trying "Total Price" (with the space — you'd think it would be one word) returns None, "Price" returns nothing, the phone number works as we saw, and then "Total" works, which makes more sense. So it's an okay service, but your mileage will vary depending on what you feed it and which field names you ask for. That's pretty much all you need to know about Form Recognizer.
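Pulled together, the receipt call looks roughly like this. It's a minimal sketch using the azure-ai-formrecognizer SDK (v3-style client) with AzureKeyCredential, as described above; the endpoint, key and file name are placeholders.

```python
# Hedged sketch of the prebuilt receipt model call.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient

client = FormRecognizerClient("https://<resource-name>.cognitiveservices.azure.com/",
                              AzureKeyCredential("<form-recognizer-key>"))

with open("receipt.jpg", "rb") as receipt:          # placeholder receipt image
    poller = client.begin_recognize_receipts(receipt)
receipt_fields = poller.result()[0].fields          # first recognized form on the page

# The prebuilt receipt model returns predefined fields such as MerchantName,
# MerchantPhoneNumber and Total, which is why "Total" works but "Total Price" doesn't.
for name in ("MerchantName", "MerchantPhoneNumber", "Total"):
    field = receipt_fields.get(name)
    if field:
        print(name, "=", field.value)
```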
[Music] Okay, let's take a look at some OCR capabilities, which live in Computer Vision, so we'll open that notebook up. At the top we install the computer vision package as we did before; this is very similar to the other computer vision tasks, but there are a couple of new calls that I'll explain as we go. We load our keys, set up our credentials and load the client, and then we have a function called print_text whose job is to print out the results of whatever text the service processes — the idea is that we feed in an image and get back the text it contains. I have two different images, because I ran it on the first one and the results were terrible, so I found a second image that did a bit better. Running the first one shows us the photo; it was supposed to extract "Star Trek: The Next Generation", but because of the artifacts and the size of the image, what comes back isn't even recognizable English. With a higher-resolution image it would probably have a better time, but that's what we got. The second image does surprisingly well — it extracts a lot more information. It really has a hard time with the Star Trek font, but we get "Deep Space 9", "Nana Visitor tells all", "Life", "Death", and some errors here and there, so it's not perfect, but you can see it's doing something. That call is the basic OCR operation — recognize_printed_text_in_stream — which is for very simple images and small amounts of text. If we're dealing with larger amounts of text and want it analyzed asynchronously, we use the Read API instead, which is a little more involved. Here we load a different image, a page of script, and call read_in_stream, which creates an operation; the service processes the image asynchronously (I think one of the variable names in the cell is supposed to be "results" — a minor typo). When we run it, it extracts the text. The cell is supposed to plot the image so we can compare, but for some reason the path isn't displaying today, even though the earlier cell shows its image with no problem, so I'll just open the file manually from the assets OCR folder (it opens in Photoshop). What it's transcribing is a guide to Star Trek — a page talking about what makes Star Trek Star Trek — and looking at the output it's actually pretty darn good. The Read API is much more efficient for this kind of job because it works asynchronously and returns the text line by line, which is what you want when you have a lot of text.
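The asynchronous pattern the notebook uses boils down to: submit the image, pull the operation ID out of the response headers, then poll until the operation succeeds. A hedged sketch, with placeholder endpoint, key and file name:

```python
# Sketch of the Read API submit-and-poll pattern (not the notebook's exact code).
import time
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes

cv_client = ComputerVisionClient("https://<resource-name>.cognitiveservices.azure.com/",
                                 CognitiveServicesCredentials("<computer-vision-key>"))

with open("ocr-script.jpg", "rb") as image:          # placeholder image of dense text
    response = cv_client.read_in_stream(image, raw=True)

# The operation id is the last segment of the Operation-Location header.
operation_id = response.headers["Operation-Location"].split("/")[-1]

result = cv_client.get_read_result(operation_id)
while result.status in (OperationStatusCodes.running, OperationStatusCodes.not_started):
    time.sleep(1)
    result = cv_client.get_read_result(operation_id)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:          # the text comes back line by line
            print(line.text)
```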
Now let's look at some handwriting. In case the image doesn't display in the notebook, I'll open it directly: it's a handwritten note that William Shatner wrote to a Star Trek fan, and it's basically incomprehensible — I can make out fragments like "was very … he was … hospital … and healthy was …" but I honestly can't read it. Let's see what the machine thinks; the cell references image_path but the variable is actually called path, so we'll fix that and run it. We get the image back along with the transcription — fragments like "very sick, he was in the hospital", "nobody lost", "his family knew Captain Halden" — and honestly it reads the note better than I could. If you look at the handwriting you can see why it guesses the way it does; one word really does look like "dying" to me. It's just poorly handwritten, but the result is pretty good for what it is. [Music] All right, let's take a look at another cognitive service: text analytics. We install the azure-cognitiveservices-language-textanalytics package and run that, and this one uses the CognitiveServicesCredentials, so it's a little more consistent with the other notebooks. We create our credentials, load our client, and the plan is to determine sentiment and understand why people liked a particular movie or not. I've loaded a bunch of reviews — I can show you the data if it helps: in the movie reviews folder there's, for example, one that says "First Contact just works; it works as a rousing chapter in the Star Trek saga and, to a lesser extent, as mainstream entertainment" — so these are different reviews of Star Trek: First Contact, which was a very popular movie back in the day. We load the reviews by iterating through the text files and printing them (I had trouble getting the last one to display, but it does get loaded in). First we use key-phrase extraction, because that might give us an indicator of what people are calling out as important: Borg ship, Enterprise, smaller ship escapes, neutral zone, damage, co-writer, Beautiful Mind, sophisticated science fiction, whales, Leonard Nimoy, "wealth of unrealized potential", filmmaker Jonathan Frakes — very interesting stuff, and "Borg ship" shows up a lot. Then we get the sentiment, i.e. how people felt about it: we call sentiment, and if the score is above 0.5 we treat it as a positive review, below 0.5 as negative. Most people thought it was a very good film, but one review scores very low, around 0.09, so let's look at it — it turns out the file is empty; I must have forgotten to paste the text in, but that's actually a good indicator of what happens when a document is blank. Another review scores about 0.04, so let's open that one: "when the Borg launch an attack on Earth, the Enterprise is sent to the neutral zone … a smaller ship escapes and travels back … the Enterprise follows … meanwhile the survivors …" — it's a plot synopsis; it doesn't say whether the person liked the film or not, and since there's nothing positive in it, a score of 0.04 makes some sense.
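For reference, a minimal sketch of the two calls used here, with the older azure-cognitiveservices-language-textanalytics package the notebook installs. The endpoint and key are placeholders and the review text is a stand-in; the newer azure-ai-textanalytics SDK has a different interface.

```python
# Hedged sketch of key-phrase extraction and sentiment with the older Text Analytics SDK.
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.language.textanalytics import TextAnalyticsClient

ta_client = TextAnalyticsClient("https://<resource-name>.cognitiveservices.azure.com/",
                                CognitiveServicesCredentials("<text-analytics-key>"))

documents = [{"id": "1", "language": "en",
              "text": "First Contact works as a rousing chapter in the Star Trek saga."}]

# Key phrases: what reviewers keep calling out as important.
for phrase in ta_client.key_phrases(documents=documents).documents[0].key_phrases:
    print("key phrase:", phrase)

# Sentiment: a score between 0 and 1, with 0.5 as the positive/negative cut-off we use.
score = ta_client.sentiment(documents=documents).documents[0].score
print("positive review" if score >= 0.5 else "negative review", score)
```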
Let's look at one more. This one scores about 0.1, which at first I read as high, but no — 0.1 is low; 0.9 would be very high. That's review number two, which says things have improved but that there's "a wealth of unrealized potential", so it's a fair, lukewarm take — maybe the kind of review that gives the film two stars. We could probably even correlate these scores with the actual ratings, because I pulled the reviews from IMDb and Rotten Tomatoes. So there you go, that is text analytics. [Music] All right, now we're on to QnA Maker, and we're not going to need to do anything programmatically, because QnA Maker is all about using no code or low code to build out a question-and-answer bot service. Go up to the address bar and type in qnamaker.ai, because as far as I'm aware it's not accessible directly through the portal — although out of curiosity, if we search the marketplace for "QnA" something does show up; Azure is being a little slow right now and it isn't loading for me, but that's fine because we're not going that route anyway. So, on qnamaker.ai, go to the top-right corner and hit Sign in; it connects via single sign-on with our Azure account. It says I don't have any knowledge bases, which is true, so let's create a new one. We have the option between stable and preview — I'm going to stick with stable, since I don't know what's in preview and I'm happy with stable. We need to connect a QnA service to the knowledge base, and back over in Azure I realize we do have to create one first: a QnA Maker service. I'll put it under my cognitive services resource group, call it something like my-qna-service with some numbers appended (it may complain about the name otherwise), pick the free tier wherever that option appears, choose free again for the search tier below, and West US sounds good to me. It generates a matching name for the related resources, which is fine. We don't strictly need Application Insights, but I'm going to leave it enabled because, in my experience, turning it off seems to flip the pricing tier to standard (S0), which is unusual. So we create our QnA Maker service. Keep in mind that even after it's provisioned it can take up to 10 minutes before the knowledge-base page will accept it, so in the meantime let's prepare our document, since QnA Maker can ingest a variety of file types. They publish a whole set of formatting guidelines, and it's pretty smart about figuring out where the headings and answers are: for unstructured content, a heading becomes the question and the text underneath becomes the answer — roughly like the sketch below.
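To make that concrete, the document we're about to write in the next step could look something like this — just an illustrative sketch of the heading-then-answer shape QnA Maker's extraction expects, not the exact file from the video:

```text
# How many fundamental Azure certifications are there?
There are four Azure fundamentals certifications: AZ-900, AI-900, DP-900 and SC-900.

# Which is the hardest Azure associate certification?
In my opinion, the Azure Administrator (AZ-104) is the hardest associate-level exam.
```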
So let's write a few things in there — since we're all about certification, certification questions make sense. "How many AWS certifications are there?" I believe right now there are 11 AWS certifications; and we'll make the question a heading, which is the right idea here. Another one: "How many fundamental Azure certifications are there?" I started to write three, but counting them out — the DP-900, the AI-900, the AZ-900 and the SC-900 — there are four, keeping this Azure-specific and leaving out things like the Power Platform exams. Next, "Which is the hardest Azure associate certification?" In my opinion it's the Azure Administrator, the AZ-104 (there was some background noise there, which is why I paused). And "Which is harder, AWS or Azure certifications?" I'd say the Azure certifications are harder, because they check exact implementation steps, whereas AWS focuses more on concepts. So we have a bit of a knowledge base; I'll save it, and assuming the service is ready by now (it needed a little time), we go back to qnamaker.ai, hit refresh, drop the list down and choose our service. Notice we can do extraction with chit-chat or extraction only — we'll include chit-chat. The name, which you can change at any time, will be "certification Q&A". To populate it we go to files, grab the document from my desktop, choose the professional chit-chat tone, and hit create; I'll see you back here in a moment. After a short wait it has loaded in our data — you can see it figured out which part is the question and which is the answer — and it also pre-loads a bunch of chit-chat defaults, so if somebody asks something silly like "can you cry" it will answer "I don't have a body", which is a nice touch. If we want to test it, we can open the test panel: say "hello" and it replies "good morning". Now ask "how many certifications are there" — we didn't say AWS, but it infers an answer anyway and picks the AWS pair, even though the Azure fundamentals pair also exists, so it's not a perfect match, but it's pretty good. Let's add a dedicated pair for Azure: "How many Azure certifications are there?" Honestly I can never remember — they keep adding more — so we'll say there are roughly 12, somewhere between 11 and 14; they update them too frequently for me to keep track. We add that as a new question-and-answer pair, then save and retrain, which only takes a moment.
Now let's test it again. "How many certifications are there" still pulls the first (AWS) answer, and "how many Azure certifications are there" should now hit the new pair; really, for the generic question you'd want some way to disambiguate. That's where follow-up prompts come in: we can make the generic question respond with something like "which cloud service provider?" and offer AWS and Azure as choices. Follow-up prompts can be used to guide a user through a conversational flow — they link Q&A pairs together and can be displayed as text or buttons for suggested actions — and since QnA Maker supports multi-turn conversations, if you had to walk someone through multiple steps you absolutely could. Let's try it: on the generic question we add a follow-up prompt "AWS" that links to the existing AWS answer, and mark it context-only, meaning the follow-up won't be understood outside this conversation flow, which is what we want. Then we add another prompt, "Azure", linked to the Azure answer, also context-only (it got away from me for a second there). We save and train, go back to the test panel, and ask "how many certifications are there" — now it asks us to choose AWS or Azure, so we've got something that works pretty well. Since I'm happy with it, we can go ahead and publish. Once it's published we could use Postman or curl to trigger it directly, but what I want to do is create a bot, because with Azure Bot Service we can host the bot and wire it into other integrations. We click through from the publish page — if you don't use that link it won't preload the knowledge-base connection, which is a pain, because you'd have to go back and click it again. We'll name it certification-qna, and for pricing we'll go with F0: the wording about 10K versus 1K premium message units is a bit confusing, but F0 usually means free, so that's what I'm choosing. For the SDK language I'll pick Node.js over C#, not that we're going to do much with it, and create it; I don't think this takes too long, so I'll see you back here in a moment. After waiting about five minutes the bot service is deployed and we can go to the resource. You can download the bot's source code — I've never actually done that, so I'm curious what it looks like; I assume that because we chose Node.js it will give us the Node template by default. It says it's creating the source zip, and I'm not sure how long that takes (I might be regretting clicking it). While that's going, on the left-hand side let's head to Channels — the download hasn't finished, we'll try it again in a second — but when we go back to the bot profile it says "unspecified bot", which is odd; maybe it just needs a little time.
I'm not sure why it's giving us a hard time, because the bot is definitely deployed — if we go over to Bot Services it's sitting right there; sometimes there's just a bit of latency with Azure. And there we go, now it loads fine. What I want to show you is that there are different channels, which are easy ways to integrate your bot into different services: Alexa, GroupMe, Skype, telephony, Twilio, Skype for Business (apparently gone now that Teams exists), Kik (do people still use that?), Slack — they really should have Discord — Telegram, Facebook, email, and Teams, which is a really good one and the one I actually use. There's also a Direct Line channel, which I won't get into, and Web Chat, which just gives you an embed code. We can test the bot right in Web Chat here: same as before, ask "how many certifications are there", pick Azure, and we get a clear answer back. Going back to the overview, let's try downloading that source code again — there we go, the download button works now — and there's the code, so I'll open it up; choosing JavaScript makes a lot more sense now. I'll make a new folder on my desktop called bot code, drag the code in, and open it in VS Code (I'm working off-screen; I'll show my screen in a moment). There's a lot of code here and I've never looked at it before, but it's not too hard to follow — API request handling, dialog choices, that sort of thing. Essentially it shows you what you'd need if you wanted to integrate the bot into your own application; nothing super exciting, and when I make the data scientist course I'm sure I'll go through it more thoroughly — I was just curious what it looked like. Now, if we want a really easy integration, we can grab an embed code: back under Channels, edit the Web Chat channel, and there's an iframe snippet. So back in Jupyter Lab I'll make a new empty notebook (the kernel doesn't really matter) and call it QnA Maker, just to show a very simple way of embedding the bot. Copy the iframe, start the cell with %%html so Jupyter treats the cell contents as HTML, paste the iframe in, and notice we have to replace our secret key — so back on the channel page I'll reveal the key, copy it, and paste it into the snippet. Run the cell and I can type right in the notebook: "where am I", "who are you", "how many Azure certifications are there" — and I wondered whether it would still figure it out if I left the "are there" off, and it does.
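The embed is the no-code route, but since the knowledge base is published it can also be queried directly over REST — the Postman/curl option mentioned earlier. A hedged sketch of what that call might look like; the host, knowledge-base ID and endpoint key are placeholders taken from the Publish page of your own resource.

```python
# Hedged sketch of querying a published QnA Maker knowledge base.
import requests

host = "https://<your-qna-resource>.azurewebsites.net/qnamaker"   # placeholder host
kb_id = "<knowledge-base-guid>"                                   # placeholder kb id
endpoint_key = "<endpoint-key>"                                   # placeholder endpoint key

response = requests.post(
    f"{host}/knowledgebases/{kb_id}/generateAnswer",
    headers={"Authorization": f"EndpointKey {endpoint_key}"},
    json={"question": "How many Azure certifications are there?"})

# Each answer comes back with a confidence score.
for answer in response.json()["answers"]:
    print(answer["score"], answer["answer"])
```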
So that's pretty much it with QnA Maker; I think we're done here and can move on to checking out LUIS, Language Understanding, to build a more robust bot. [Music] All right, we're on to our last cognitive service, and it's LUIS — or Louise, depending on how you like to say it — which is Language Understanding. Type in luis.ai, which brings up an external website; it's still part of Azure, it just has its own domain. We choose our subscription, and since we have no authoring resource we'll have to create one. I pick my cognitive services resource group, and it asks for an Azure resource name — there's an option to create a new cognitive services account, but we already have one, so I don't want to make another. It should show up here, but only resources in valid authoring regions are listed, so it's possible ours is just in the wrong region; we may end up creating a second one, and that's totally fine, because we're deleting everything at the end anyway. So: my-cog-service-2, in West US, since that's one of the supported authoring regions. Double-checking in the portal under cognitive services, our existing resource is indeed deployed in a region that isn't showing up as valid, so if West US is what it wants, we'll give it what it wants. Now we have an authoring resource; refreshing doesn't show a second account in the list, but that's fine. We'll name the app "my sample bot" and use English as the culture — and if the resource dropdown is empty for you, don't worry, you can choose it later; the first time I did this it didn't show up either. It walks you through how a LUIS schema works — intents, entities and example utterances — but we're just going to set up something very simple. We create an intent; the example you always see is flight booking, so that's what we'll do, and we add an example utterance like "book me a flight to Toronto". The idea is that if someone types something like that, LUIS returns this intent along with a confidence score and metadata, and then we can respond programmatically in our own code. We also want entities, and you can create one right from the utterance — we'll add a named entity called "location". When you create an entity you can flip between types such as machine-learned and list: machine-learned is for values that vary (imagine the fields of a ticket order), whereas a list is a fixed set of values, like a list of airports, which makes sense here. If we go over to the Entities tab we can see it listed. Nothing super exciting there; we'll name the intent "book flight", and even though there's only the one example utterance, we'll go ahead and train our model.
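Once the model is trained and published — we do the publishing in a moment — the prediction endpoint can be queried directly. A hedged sketch using the v2 REST endpoint; the app ID, region and prediction key are placeholders from the LUIS portal, and the exact URL format varies between LUIS API versions.

```python
# Hedged sketch of hitting a published LUIS prediction endpoint.
import requests

app_id = "<luis-app-guid>"           # placeholder
prediction_key = "<prediction-key>"  # placeholder
endpoint = f"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"

response = requests.get(endpoint, params={
    "subscription-key": prediction_key,
    "q": "book me a flight to Seattle"})

result = response.json()
print(result["topScoringIntent"])    # e.g. {'intent': 'book flight', 'score': 0.97}
print(result["entities"])            # any location entities picked out of the utterance
```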
We don't need a huge training set here — building a complex bot is more of an associate-level topic, and we cover a lot of it in the lecture content — but now we can test this: type "book me a flight to Seattle" and notice it comes back with "book flight". We can inspect the result and see some additional data, including the top-scoring intent and how likely LUIS thinks that intent was; there are other details, but they don't really matter here. Next we go ahead and publish the model into a production slot; there are options for sentiment analysis and speech priming, neither of which we care about, and then we can see where our endpoint is — so now we have an endpoint we can work with. That's pretty much all you really need to learn about LUIS, and I think we're all done with cognitive services. We'll keep our Jupyter notebook around because we'll still use it for some other things, but what I want you to do is make your way over to your resource groups — if you've been keeping things tidy it's all within the one group, the QnA resources and everything else — and after checking it's all there, delete the resource group, which should wipe away everything for the cognitive services part. All right, we're all good here; I'll leave this open because it's always a pain to get back to, but let's make our way back to the home of Azure Machine Learning Studio, because now we can actually explore building machine learning pipelines. [Music] Okay, we're on to the ML follow-alongs, where we're going to learn how to build some pipelines, and I think the easiest place to start is Automated ML, also known as AutoML. The idea is that it builds out the entire pipeline for us — we don't have to do any thinking, we just say what kind of prediction we want it to make. So we start a new Automated ML run, and we're going to need a dataset. I don't have one, but the nice thing is that they offer open datasets: click there and you'll see a bunch, and a lot of these you'll come across quite often, not just on Azure — like this diabetes one, which I've seen everywhere. If we click it we can read a bit more: the diabetes dataset has 442 samples with 10 features, ideal for getting started with machine learning algorithms, and it's one of the popular scikit-learn toy datasets, which is probably where I've seen it before. Scrolling down you can see the data, and that it's available in Azure Notebooks, Databricks and Azure Synapse. The columns are things like age, sex, BMI and BP, and the Y column is the value we're trying to predict — it's a numeric measure rather than a Boolean, so this isn't a binary classifier; you could frame "do you have diabetes, yes or no" as binary classification, but here we're predicting a numeric value. Conventionally, the feature columns are referred to as X and the prediction target as Y; sometimes the target column is literally named Y, as it is here, and sometimes it's named after what it actually represents.
So we'll close that off, choose the diabetes open dataset (it becomes dataset 1), skip the feedback prompt, select the sample diabetes dataset and hit next. Now it needs an experiment, which is just a container to run the model in, so we'll create a new one called my-diabetes — sounds a bit odd, but that's what it is — and the target column we want it to learn to predict is Y. We don't have a compute cluster yet, so we create a new compute. There's dedicated versus low priority: low priority is cheaper, but if it can't get compute nodes your job may be pre-empted, so I'm sticking with dedicated for the time being, and with CPU. With the default size this run takes about an hour and only costs around 15 cents, so if you don't mind waiting, that's fine; I want it done sooner, so I'm looking at something more powerful. I'm debating, because the 90-cents-an-hour option is probably overkill — this is statistical modelling, not deep learning — but comparing the sizes (one has 32 GB of RAM, another 14 GB, and another mostly differs in storage), I'll take the higher tier in this family just to see whether it finishes faster, without jumping to a GPU, because I don't think a GPU will help much here. The compute name will be my-diabetes-machine; the minimum number of nodes is how many dedicated nodes stay provisioned, and for the maximum I just want one node. My first attempt at the name failed the length validation, so I shorten it and create the cluster; spinning it up takes a little while, so I'll see you back here when it's done. Okay, after a short wait the cluster is running — double-checking under Compute, it shows up under compute clusters (notice it's presented slightly differently from a compute instance: sizes, nodes and run times rather than applications). Back in the wizard we hit next, and notice it suggests the task type for us — I think it samples your target column: because our Y is a numeric value that's all over the place, it suggests regression, that is, predicting a continuous numeric value. If the label were text, or just zeros and ones, it would probably suggest classification instead. You might actually prefer a binary classifier if you only wanted a yes/no diabetes answer, but that's another story; we'll go with regression. Also note that as soon as we confirmed, it simply started the run — it didn't stop to ask whether we wanted to kick it off.
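The studio wizard is configuring all of this for us, but for reference, roughly the same run can be expressed with the Python SDK. This is a hedged sketch only: it assumes an existing workspace, that the dataset and cluster names below match what you registered (they're hypothetical here), and parameter names can differ slightly between SDK versions.

```python
# Hedged sketch of the equivalent AutoML run via the azureml SDK.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
diabetes = Dataset.get_by_name(ws, name="diabetes dataset 1")   # hypothetical dataset name

automl_config = AutoMLConfig(
    task="regression",                        # chosen because Y is a continuous number
    training_data=diabetes,
    label_column_name="Y",                    # the target column we selected
    compute_target="my-diabetes-machine",     # hypothetical cluster name
    primary_metric="normalized_root_mean_squared_error",
    experiment_timeout_hours=3)               # the 3-hour training-time cap from the UI

run = Experiment(ws, "my-diabetes").submit(automl_config)
run.wait_for_completion(show_output=True)
```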
It's going to do featurization, which means it will automatically select and engineer features for us — exactly what we wanted — and it's set up to do regression. There's some configuration here: the training time is 3 hours, which doesn't mean it will train for three hours; it's essentially a timeout. You can also set a metric score threshold that a model has to meet, letting the run exit early once that's achieved, plus the number of cross-validations used to make sure the results are sound. You can see the blocked algorithms — TensorFlowDNN and TensorFlowLinearRegressor — and if it were actually training a deep neural network I probably would have chosen a GPU to see if it went faster. Look at the primary metric too: normalized root mean squared error. Sometimes the exam will actually ask what the primary metric for a given task is, so it's worth noticing what gets used here, and I'll be sure to highlight that in the lecture content. This will take some time to run, and the data guardrails won't populate until it has run, so we'll just let it go and I'll see you back here when it's done. All right, after a very, very long wait, our AutoML job is done — it took 60 minutes, so using a larger instance didn't save me any time; maybe a GPU instance would have been faster, which I'd be curious to try, but that's not something for this certification course, and the cheaper instance took the same amount of time, so it really depends on the types of models being run. It trained about 42 different models (I thought I saw more last time), all kinds of algorithms, and then it chooses the top candidate — here, a voting ensemble. We don't really cover ensembles in this course because it gets too deep into ML, but an ensemble combines two or more weaker models and merges their results to produce a more powerful model. We also get some explanations, though when I tried this before I didn't get much out of it: the model performance tab says it requires an array of predicted values to be supplied, which we didn't supply, so it's empty; the data explorer lets you select a cohort of the data, and here it gives us an indication about age; and aggregate feature importance lets you use the slider to show features in descending importance and compare up to three cohorts side by side. S5 and BMI come out on top — I don't know what S5 is without looking up the dataset, but BMI is body mass index, which is a pretty clear indicator related to diabetes, so that makes sense; age doesn't seem to be a huge factor, which is kind of interesting. Individual feature importance lets you narrow in on single points and ask, say, why a particular outlier is where it is — that person is age 79 — so it does give you some explanation of why the predictions are the way they are.
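Since the exam can ask about the primary metric mentioned above, here's a quick numeric illustration of normalized root mean squared error with made-up values: it's just RMSE divided by the range of the target, so lower is better and the value stays comparable across differently scaled targets.

```python
# Illustration of normalized RMSE with made-up numbers (not from the run above).
import numpy as np

y_true = np.array([151.0, 75.0, 141.0, 206.0, 135.0])   # made-up target values
y_pred = np.array([140.0, 90.0, 150.0, 190.0, 120.0])   # made-up predictions

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
normalized_rmse = rmse / (y_true.max() - y_true.min())   # RMSE scaled by the target range
print(rmse, normalized_rmse)
```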
There's also a model performance view, which shows mean-squared-error-style metrics again — so the point is that this is where you finally see the metrics (I guess you always have to click in here), along with a bunch of other values. The data transformation view illustrates the data processing, feature engineering, scaling techniques and machine learning algorithm that AutoML used; if you were a real data scientist all of this would make sense to you, and I think with time it will — at this level I don't care too deeply about this particular model, but if you were building something for real, this information becomes a lot more valuable. So the model is done, and the idea is that we can deploy it. Going back from the individual run to the models list for the AutoML experiment, I believe you can deploy whichever model you like, not just the top candidate, if you prefer a different one. We also skipped over data guardrails: this is where the automatic featurization is reported — how it handled splitting, how it handled missing feature values, and high cardinality, which is when a column has too many distinct values and might need something like dimensionality reduction — basically it flags preprocessing that might make the data easier to work with. If we're happy with the model we can deploy it, so let's call the deployment infer-my-diabetes. We have a choice between AKS (Azure Kubernetes Service) and ACI (Azure Container Instance); let's try Kubernetes, since we'll use the container instance elsewhere, and name it something like aks-diabetes-prod. It then asks for a compute name — one of the inference clusters — so to deploy this way we first need to create an inference cluster, and I'm not sure I have enough quota, but let's give it a go. We pick a VM size in whatever region we're in — I had to check; the workspace is in East US — hit next, name it my-diabetes-prod, and choose the number of nodes; note that the number of nodes multiplied by the virtual machine's number of cores must be greater than or equal to 12. Again, if you're concerned about cost you can just watch this part — this is a fundamentals certification, so getting all the hands-on experience yourself isn't essential — but I'm exploring it because the cost isn't a big deal to me. First attempt: an error saying the agent pool must use a VM SKU with more than two cores and four gigabytes of memory — fair enough, I chose too small a size. Second attempt: "invalid parameters", because the cluster name already exists from the failed attempt (you'd think that wouldn't matter, but we have to delete the failed one first). Third attempt: quota exceeded — I can't create it without filing a support request to raise my quota, and that's not worth it here, so instead maybe we could just deploy to a container instance.
So let's deploy to a container instance, if it'll let us — notice that I don't have to fill in anything additional; it should just deploy. We'll let that go and I'll see you back here in a bit. All right, I'm back and checking up on my AutoML deployment. Under Compute there's nothing listed under inference clusters, and under the my-diabetes experiment we did click deploy, so it should have created an ACI instance; over in the portal we can see there is indeed a container resource and it's running. So back in the studio let's check Endpoints — and here it is; I had been looking under Models, that was my problem. (Pipeline endpoints are a separate thing — I'd have expected a designer deployment to show up there.) So here is our diabetes prod endpoint, and if we want to test it we can pass data in. I don't know what all the columns mean, so let's open the sample diabetes dataset and explore the data to pick plausible values: age 36, sex 2 (it's coded as 1 or 2), BMI 25.3 — and we already know BMI is the major factor — BP 83, S1 160, S2 99.6, S3 45, S4 4.5, S5 5.1, and S6 82 (I wondered why it stopped asking for values, but the features only go up to S6). We hit test, and we get a result back: 168. So that is AutoML, all complete there for you.
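Instead of the Test tab, the deployed ACI endpoint can be called directly. A hedged sketch: the scoring URI below is a placeholder you'd copy from the endpoint's details page, the field names mirror the diabetes columns we just typed in, and the exact payload shape depends on the scoring script the deployment generated, so treat this as illustrative only.

```python
# Hedged sketch of calling the deployed scoring endpoint over HTTP.
import requests

scoring_uri = "http://<aci-endpoint>.azurecontainer.io/score"   # placeholder URI

payload = {"data": [{
    "AGE": 36, "SEX": 2, "BMI": 25.3, "BP": 83.0,
    "S1": 160, "S2": 99.6, "S3": 45.0, "S4": 4.5, "S5": 5.1, "S6": 82}]}

response = requests.post(scoring_uri, json=payload,
                         headers={"Content-Type": "application/json"})
print(response.json())   # e.g. a single predicted value like 168
```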
[Music] All right, let's take a look at the visual designer, because it's a great way to get started easily if you don't fully know what you're doing yet and you want something a little more advanced and customizable than AutoML. It's easiest to start with one of the samples, so let's expand that list and see what's there: binary classification with a custom Python script, tune parameters for binary classification, multiclass classification (letter recognition), text classification — all sorts of things. Binary classification is usually pretty approachable, and I'm looking for something simple. Reading the descriptions — one shows filter-based feature selection, one predicts customer relationships using binary classification, one handles imbalanced datasets using SMOTE, one uses a customized Python script for cost-sensitive binary classification, and one tunes model parameters to find the best model during training — let's go with the tune-parameters one; it seems fine to me. You can see it's built on a sample dataset, and if you wanted to explore the sample datasets you could literally drag them out onto the canvas and do things with them. I haven't built one of these end to end yet, and I don't think that's super important for this level of exam, but this shows you there's a pre-built pipeline, and once you start to get the hang of ML the full flow isn't too confusing. At the beginning we have our classification data, and then a Select Columns in Dataset step that excludes workclass, occupation and native-country — so it's doing some preprocessing by dropping those columns (it would be interesting to look at the dataset itself; it doesn't show under the Datasets tab yet, I believe because we haven't submitted the pipeline). Next is Clean Missing Data, set to clean all the columns with a custom substitution value — it doesn't show exactly what it substitutes, but that's okay. Then Split Data: it's very common to split your data so you have separate training and test sets, and it's usually best to randomize before splitting, which gives better results. After that comes Tune Model Hyperparameters, where it uses ML to search for the best parameter values, paired with a two-class decision tree as the model; then it will Score Model and finally Evaluate Model to see how well it did. This is all set up and ready to go, so all we have to do is open the settings gear at the top and choose some compute. We already have the diabetes cluster, but I'm going to create a new one — it recommends a predefined configuration to set up training compute quickly, and while I'm not sure it needs two nodes, I'll take the suggestion — and call it binary-pipeline, save, and wait for it to spin up, which takes a little time. After a bit I get a notification that it's ready (you'd have to be paying close attention to catch the pop-up), so back in the designer we select the binary-pipeline compute, leave the other options alone, and hit Submit. It needs a new experiment, which I'll also call binary-pipeline, and submit. Now it's running: after a little while the modules start going green, and it's off to the races. There's not much to do here — I've never run this particular sample, so I don't know whether it takes 30 minutes or an hour — so I'll see you back when it's done; it's not that fun to watch, but it's cool that you get a visual illustration of the run.
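For reference, the Split Data → Tune Model Hyperparameters → Score Model → Evaluate Model chain we just walked through is roughly the classic scikit-learn workflow below. This is a simplified, hedged analogy (a plain decision tree on synthetic data), not what the designer actually runs internally.

```python
# Rough scikit-learn analogue of the designer pipeline's main steps.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# "Split Data" with shuffling, like the randomize-then-split step in the pipeline
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=0)

# "Tune Model Hyperparameters": search a small grid for the best tree settings
search = GridSearchCV(DecisionTreeClassifier(random_state=0), {"max_depth": [2, 4, 8]})
search.fit(X_train, y_train)

# "Score Model" + "Evaluate Model"
predictions = search.predict(X_test)
print("accuracy:", accuracy_score(y_test, predictions))
```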
I just wanted to peek in and see how it's progressing: it's still going, and it's still just cleaning the data. If we go over to experiments and into the binary-pipeline run, we're about 8 minutes in and it hasn't done a whole lot — I would have thought it'd be a little faster; I'm used to AWS SageMaker, where this usually doesn't take as long — but it is moving, and we're almost out of the preprocessing phase and on to the model tuning. All right, after waiting a while, the pipeline is done: under experiments, binary-pipeline shows it took 14 minutes and 22 seconds. You can click in for some additional information, but there's not much else to see — we already watched all the steps run. There's nothing under metrics either; it mentions enabling metrics and logging data points to compare within and across runs, but we only did a single run, so there's nothing to compare. Let's say we're happy with this and want to deploy the model. Back in the designer, in the top-right corner we can create an inference pipeline (I can't remember whether Submit would re-run the training, and I don't want to run it again), with a choice between a real-time and a batch pipeline — we'll say real-time. This creates a completely different pipeline: the first one was for training the model, and this new one is specifically designed for deployment — it takes in data and does inference. We submit it under the same binary-pipeline experiment; I expected it might need a different kind of compute, but it runs on the one we have, and once it finishes running we can go ahead and deploy. After a little while the inference pipeline has run and it's ready for use — the idea is that requests flow in through the web service input and results come out of the web service output, though that detail isn't so important at this level of certification. So let's see what it looks like to deploy it: we can pick a new real-time endpoint or an existing one, and since we don't have an endpoint yet we'll create one called binary-pipeline (it insists on lowercasing the name). We also choose between Azure Kubernetes Service and Azure Container Instance; it's a lot easier to deploy to a container instance, and we'd be waiting forever for Kubernetes to start up, so we're going with ACI. There are some options like SSL that we won't worry about, so we just hit deploy and wait for the real-time endpoint to come up.
While that's deploying, if we go over to Compute, the deployment itself is a container instance, so I don't think it's going to show up under our clusters — and in fact we don't need the binary-pipeline training cluster any more, so let's delete it to free up quota for something else. Coming back to the designer, I was trying to see where to keep track of the deployment; it just says it's waiting for the real-time endpoint, so I'll see you back here when it's done — it only takes a little while. All right, the deployment is finished: if we make our way over to Endpoints, there's the binary-pipeline endpoint, and we can test it — it's nice that it pre-loads some sample data for us. Hitting test, we get results back: it echoes the input columns along with the scored label, the income class and the scored probabilities, which are the values we actually care about. So those are endpoints, and that's the end of our exploration of the designer. [Music] All right, now let's look at what it takes to actually train a job programmatically through a notebook. Remember the samples we saw earlier — there's an image classification MNIST tutorial, and MNIST is a very popular dataset for computer vision. These samples are really great if you want to learn: you should go through them and read them carefully. I've done a lot of this before, so it's not too hard for me to follow, but I've actually never run this one, so let's run it together. We want to be back in Jupyter Lab, so click that link, or go through Compute if it's being a bit finicky, and open a tab. I'll click into the tutorial — there are a few notebooks: part one is training and there's a deployment part as well; let's focus on training (I don't know that we really need to deploy), and give it a read. In this tutorial you train an ML model on remote compute resources, using the training and deployment workflow of the Azure Machine Learning service from a notebook, and there are two parts. It uses the MNIST dataset with scikit-learn and the Azure Machine Learning Python SDK. MNIST is a popular dataset of 70,000 grayscale images, each a handwritten digit of 28 by 28 pixels representing a number from 0 to 9, and the goal is to create a multiclass classifier that identifies which digit a given image represents. So let's jump in. The first thing is to import our packages: %matplotlib inline so that whatever we plot actually displays, then numpy and matplotlib itself, then azureml.core and the Workspace class since we'll need one, and finally it prints the SDK version to make sure we have a recent enough version.
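Condensed, the first few cells of the tutorial do roughly what's sketched below: check the SDK version, connect to the existing workspace from config.json, create an experiment, and provision a small CPU cluster (the compute step is described just below). The experiment and cluster names here are stand-ins and may differ from the sample's exact values.

```python
# Hedged sketch of the tutorial's setup cells.
import azureml.core
from azureml.core import Workspace, Experiment
from azureml.core.compute import AmlCompute, ComputeTarget

print(azureml.core.VERSION)                  # e.g. 1.28.0 in this walkthrough

ws = Workspace.from_config()                 # reads config.json for the existing workspace
experiment = Experiment(workspace=ws, name="sklearn-mnist")   # hypothetical experiment name

compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_D2_V2",                # the cheapest CPU size, per the defaults below
    min_nodes=0, max_nodes=4)
compute_target = ComputeTarget.create(ws, "cpu-cluster", compute_config)
compute_target.wait_for_completion(show_output=True)
```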
right version here, okay, so this is 1.28.0. it's pretty common, even on AWS, to have a script in here that updates the SDK in case it's out of date, so I'm surprised they didn't include one, but that's okay. we'll scroll on down, and by the way we're using the Python 3.6 Azure ML kernel; if you're watching this in the future they might have retired the old one and you'd be using 3.8, but it should generally work if it's in their sample set, I assume they try to maintain that. okay, so connect to a workspace: create a workspace object from an existing workspace, it reads the file config.json. so we'll go run that, I assume it's kind of like a session, and here it says it found our workspace. so really it's not creating a workspace, it's just returning the existing one so that we have it as a variable. create an experiment: that's pretty clear, we saw experiments in the AutoML and the designer follow-alongs, so we'll just hit run there. okay, so we gave the experiment a name and created it. I wonder if it actually created one yet, so let's go over to experiments and see if it's there, and it is there, cool, that was fast, I thought it would print something out but it didn't do anything there. next, create or attach an existing compute resource by using Azure Machine Learning compute, a managed service that lets data scientists train models on clusters of Azure VMs, et cetera. creation of a compute target takes about five minutes, so let's see what it's trying to create. we have some environment variables that it wants to load in, and I'm not sure how environment variables get set in Jupyter or how they get fed in, but it doesn't matter because these all have defaults. here it says a CPU cluster with zero to four nodes, and it's going to use a Standard_D2_v2, which is the cheapest one we can run. I kind of want something a little bit more powerful just for myself, just because I want this to be done a lot sooner, but if you don't have a lot of money just stick with what's there. so this is a CPU cluster, and if we go here I just want to see what our options are. it's not showing us options because we don't have enough quota for the following VM sizes, probably because I'm running more than one VM right now, yes, I've hit my quota. so I'd probably have to request more. I think this is the one I'm using, the Standard D2 v2, it's the same one, right, so to request a quota increase I don't know if it's instant, I'd have to make a support ticket and that's going to take too long. the reason is that I'm running the AutoML and the designer in the background here, trying to create all the follow-alongs at the same time, so what I'll do is come back when I'm not running one of those other ones and continue on; we're just at the step where we want to create a new compute cluster. all right, so I'm back and I freed up one of my compute instances, if I go over here now I just have the one cluster instance for my AutoML. what we'll do here is again just read through this: it creates a CPU cluster with zero to four nodes of Standard_D2_v2, and I guess we'll just stick with what is here. reading through the code, it looks like it tries to find the compute target first, and if it doesn't exist it provisions it.
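to make those cells concrete, here's roughly what they look like with the v1 SDK; it's only a sketch, and the experiment name and cluster name below are placeholders I've picked, not necessarily what the sample uses:

```python
from azureml.core import Workspace, Experiment
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

# connect to the existing workspace described by config.json
ws = Workspace.from_config()

# create (or just get a handle to) an experiment that groups our training runs
exp = Experiment(workspace=ws, name="sklearn-mnist")  # placeholder name

cluster_name = "cpu-cluster"  # placeholder name
try:
    # reuse the cluster if it already exists in the workspace
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print("found existing compute target")
except ComputeTargetException:
    # otherwise provision a small autoscaling CPU cluster (0 to 4 nodes)
    config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D2_V2", min_nodes=0, max_nodes=4
    )
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    compute_target.wait_for_completion(show_output=True)
```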
it will create the cluster, poll for a minimum number of nodes for a specific amount of time, and then wait for completion. so we'll go ahead and hit play, and that's going to create us a new cluster, so we're just going to have to wait a little while, about five minutes, and I'll see you back here in a moment. all right, so the cluster started up, and if we go back over here we can see that it's confirmed. I don't know why it was so quick, but it went pretty quick there. so we're on the next section, explore the data: download the MNIST dataset and display some sample images. it's an Azure Open Dataset, and the code retrieves a FileDataset object, which is a subclass of Dataset. a FileDataset references single or multiple files of any format in your datastore, and the class gives you the ability to download or mount the files to your compute by creating a reference to the data source location. additionally, you register the dataset to your workspace for easy retrieval during training. there's a bit more how-to, but we'll give it a good read here. so we have the MNIST open dataset, and it's kind of nice that they have that reference, we have a data folder, we make the directory, we get the dataset, we download it, and then we register it. so let's go ahead and run that, I'm not sure how fast it is, it shouldn't take too long. as it's running we'll go to the left-hand side, refresh, and see if it appears. not yet... there it is. going in here to maybe explore the data, I'm not sure what it would look like because these are all images, right, yeah, they're in ubyte .gz files, they're compressed, so we're not going to be able to see inside them, but they're definitely there, and the dataset is now registered. display some sample images: load the compressed files into numpy arrays, then use matplotlib to plot 30 random images from the dataset. note this step requires a load_data function, which is included in utils.py, a file included in the sample folder; we have it over here, we can just double-click it, it's a very simple file with the load_data function, and we'll go ahead and run that. so load_data gives us X_train and X_test, are we setting up our training and testing data here? it kind of looks like it, because it says train and test data, and that's when we usually see that kind of split. then let's show some randomly chosen images, yeah, so they do set up the training data here, and down below we're actually showing the images, so here are some random digits. train on a remote cluster: for this task you submit the job to run on the remote training cluster you set up earlier. you create a directory, create a training script, create a script run configuration, and submit the job. so first we'll create our directory, and notice it created this directory over here, because it's going to put the training file in there, and then the next cell actually writes out a training file, which makes quite a bit of sense. so if we click into here it should now have a training file, and we'll give it a quick read to see what's going on. a lot of times when you create these training files, and this is the same if you're using AWS SageMaker, you create a train file because that's how the frameworks work, and you'll have a set of arguments.
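the explore-the-data cells are along these lines; this is a sketch from memory of the sample, so treat the folder name and the registered dataset name as placeholders:

```python
import os
import glob
import numpy as np
from azureml.opendatasets import MNIST
from utils import load_data  # helper shipped alongside the sample notebook

# local folder on the notebook VM to download the raw MNIST files into
data_folder = os.path.join(os.getcwd(), "data")
os.makedirs(data_folder, exist_ok=True)

# pull MNIST from Azure Open Datasets as a FileDataset, download it,
# and register it in the workspace so it can be reused during training
mnist_file_dataset = MNIST.get_file_dataset()
mnist_file_dataset.download(data_folder, overwrite=True)
mnist_file_dataset = mnist_file_dataset.register(
    workspace=ws, name="mnist_opendataset",
    description="MNIST training and test files", create_new_version=True
)

# load the compressed ubyte .gz files into numpy arrays and scale pixels to [0, 1]
X_train = load_data(
    glob.glob(os.path.join(data_folder, "**/train-images-idx3-ubyte.gz"), recursive=True)[0], False
) / 255.0
y_train = load_data(
    glob.glob(os.path.join(data_folder, "**/train-labels-idx1-ubyte.gz"), recursive=True)[0], True
).reshape(-1)
```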
these could be parameters to run for training, and there could be a whole assortment of them. here they're loading in the training and testing data, so it's the same stuff we saw earlier when we were just viewing the data. then it's doing a logistic regression, it's using the liblinear solver, so a linear model, and it's doing multiclass classification with it. then it calls fit, and fit is what actually performs the training, then it makes a prediction on the test set, and then we get an accuracy, so we're getting a kind of score, and notice that it's using accuracy as the evaluation metric. at the end it dumps the model, because a lot of times you have to save the model somewhere, so it's outputting the trained model as a .pkl file, which is just a pickled scikit-learn model. if you were using TensorFlow you would use TensorFlow Serving at the end of this; a lot of frameworks like PyTorch, TensorFlow, or MXNet have a serving layer, but since we're just using scikit-learn, which is very simple, it just dumps that file into the outputs folder. this is going to run in a container, so this outputs folder isn't necessarily the outputs folder here in the notebook, it's more like the outputs of the container, and a lot of times the container will then place it somewhere, so it'll be saved on the container but passed out to something like the model registry. anyway, we ran this and that generated the file, and we don't want to keep running it multiple times, though it would probably just overwrite the file, so it's not a big deal. here it says notice how the script gets the data and saves the model: there's a data folder argument, I guess we didn't look at that, and if we go to the top, I wasn't really paying attention to where that was, it looks like that's where it loads the data in from, and here it saves the model, and anything written to the outputs directory is automatically uploaded to your workspace, so I guess that's just how it works, it will probably end up in here then. next: utils.py is referenced by the training script to load the dataset correctly, so copy the file into the script folder. we'll run this to copy the file over, and did it put it in here? yeah, it just put it in there, because when it actually packages things up for the container it's going to bring that file over since it's a dependency. configure the training job: create a ScriptRunConfig with the directory that contains the script, the compute target, the training script, et cetera. sometimes in other frameworks they'll just call these estimators, but here it's called a ScriptRunConfig. I'm just trying to see what it's doing, so scikit-learn is a dependency of the environment, okay, sure, we'll just hit run. then down below we have the ScriptRunConfig itself, and it looks like we're passing our arguments, so we're saying this is our data folder, which we're mounting, and then we're setting the regularization to 0.5; these are technically the parameters that get parsed at the top of train.py. sometimes you'll pass in dependencies here as well if you're including other files, and I guess that's up here, where it says environment, and we're saying include the azureml-defaults packages and scikit-learn.
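the generated train.py is roughly the following (the notebook writes it out with a %%writefile cell magic); this is a sketch of the scikit-learn MNIST sample from memory, so treat the model file name and argument names as the sample's choices rather than something you have to match exactly:

```python
# train.py - entry script executed on the remote cluster
import argparse
import glob
import os

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

from azureml.core import Run
from utils import load_data  # copied into the script folder as a dependency

# arguments passed in from the ScriptRunConfig
parser = argparse.ArgumentParser()
parser.add_argument("--data-folder", type=str, dest="data_folder", help="mounted data folder")
parser.add_argument("--regularization", type=float, dest="reg", default=0.5, help="regularization rate")
args = parser.parse_args()

# load the train/test splits from the mounted MNIST files and normalize pixels
X_train = load_data(glob.glob(os.path.join(args.data_folder, "**/train-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
X_test = load_data(glob.glob(os.path.join(args.data_folder, "**/t10k-images-idx3-ubyte.gz"), recursive=True)[0], False) / 255.0
y_train = load_data(glob.glob(os.path.join(args.data_folder, "**/train-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)
y_test = load_data(glob.glob(os.path.join(args.data_folder, "**/t10k-labels-idx1-ubyte.gz"), recursive=True)[0], True).reshape(-1)

# get the run context so logged metrics land in the run history
run = Run.get_context()

# multiclass logistic regression; liblinear handles multiclass one-vs-rest,
# and fit() is the actual training step
clf = LogisticRegression(C=1.0 / args.reg, solver="liblinear", random_state=42)
clf.fit(X_train, y_train)

# score on the held-out test set using accuracy, and log the metrics
acc = np.average(clf.predict(X_test) == y_test)
run.log("regularization rate", float(args.reg))
run.log("accuracy", float(acc))

# anything written to ./outputs is uploaded to the workspace automatically
os.makedirs("outputs", exist_ok=True)
joblib.dump(value=clf, filename="outputs/sklearn_mnist_model.pkl")
```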
that environment, the azureml-defaults plus scikit-learn, then gets passed into the ScriptRunConfig as the env, so that makes sense to me. we haven't run that yet because we don't see a cell number here. submit the job to the cluster: let's go ahead and do that. it says it returns a preparing or running state as soon as the job is submitted, and right now it's in a starting state. monitor the remote run: in total the first run takes about 10 minutes, but for subsequent runs, as long as the dependencies in the Azure ML environment don't change, the same image is reused, and hence the start time is much faster. here's what's happening while you wait. image creation: a Docker image is created matching the Python environment specified by the Azure ML environment, and the image is built and stored in the ACR, the Azure Container Registry associated with your workspace. let's go take a look and see if that's the case, because sometimes resources like that aren't visible to you, so I'm just curious whether we actually see it, and yep, there it is, so they did not lie, it is associated with your workspace. image creation and uploading takes about 5 minutes, and this stage happens once for each Python environment since the container is cached for subsequent runs. during image creation, logs are streamed to the run history and you can monitor the image creation process using those logs, wherever those are. scaling: if the remote cluster requires more nodes to execute the run than are currently available, additional nodes are added automatically, and scaling typically takes about five minutes. I've seen this before, where in your compute view it'll just say scaling because there aren't enough nodes. running: at this stage the necessary scripts and files are sent to the compute target, the data stores are mounted or copied, and the entry script is run, the entry script being the train.py file. while the job is running, stdout and the ./logs directory are streamed to the run history, so you can monitor the run's progress using these logs. post-processing: the ./outputs directory of the run is copied over to the run history in your workspace so you can access the results. you can check the progress of a running job in multiple ways, and this tutorial uses a Jupyter widget, so it looks like we can run this and watch the progress. so we'll run that, and it's actually showing us the progress, that's kind of cool, I really like that, it's just a little widget showing us all the things that it's doing. let's go take a look at what we can see under experiments and our run, because it was talking about things like outputs. over here in the outputs and logs I'm just curious whether this is the same thing and whether it tails, yeah, it does tail, it just moves, so we can monitor it from here, I guess that's what it was talking about. here we can see that it's setting up Docker, it's actually building a Docker image, and I'm not sure whether it has sent it to ACR yet, it looks like it's still downloading and extracting packages, so maybe it's actually running on the image now, so we'll just wait. if we pop back over here we can see probably the same information, is it identical? yeah, it is. so we're 3 minutes in, and it's probably not that fun to watch it in real time and talk about it, so let's just wait until it's done, I'll see you back then. all right, so I'm about 17 minutes in here and I'm not seeing any more movement, so it could be that it is done. it does say that if you run the next step it will wait for completion.
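pulling those pieces together, the configure-and-submit cells are approximately this, again only a sketch with the v1 SDK; the environment name and source folder are placeholders, and the exact pip packages in the sample may differ:

```python
from azureml.core import Environment, ScriptRunConfig
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails

# environment: azureml-defaults plus scikit-learn, which gets baked into a Docker image in ACR
env = Environment("tutorial-env")  # placeholder name
env.python.conda_dependencies = CondaDependencies.create(
    conda_packages=["scikit-learn"],
    pip_packages=["azureml-defaults", "azureml-dataset-runtime[pandas,fuse]"],
)

# the script run config ties together the script folder, the entry script,
# the arguments (mounted data folder plus regularization rate), the cluster, and the environment
args = ["--data-folder", mnist_file_dataset.as_mount(), "--regularization", 0.5]
src = ScriptRunConfig(
    source_directory="sklearn-mnist",  # placeholder folder name
    script="train.py",
    arguments=args,
    compute_target=compute_target,
    environment=env,
)

# submit to the experiment, watch progress in the Jupyter widget, then block until done
run = exp.submit(config=src)
RunDetails(run).show()
run.wait_for_completion(show_output=True)
```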
you can specify show_output=True for a verbose log. here it actually did output a moment ago, so maybe it already was done, but I just ran it twice, so I'm not sure if that's going to cause me issues, because I can't run the next step unless I stop this. can I individually cancel this one here? I think I can just hit interrupt the kernel, there we go. okay, so I think that it's done, because it's 18 minutes in and I don't see any more logging here, it's just not very clear, and the logs just have so much going on, so if we had been keeping pace we probably would have seen all of these get created, yeah, we just had a few more outputs there, but I think that it's done, it's just that there's nothing definitively saying done, do you know what I'm saying, and then up here, oh, I guess it does say that it's done. all right, so I just never ran it with this tool so I didn't know, but it does definitively say that I already ran this, so we don't need to run it again, I just feel like we'd get stuck there. so let's take a look at the metrics: the regularization rate is 0.5 and the accuracy is 0.9-something, which is pretty good. the last step is that the training script wrote the scikit-learn model into outputs, and I want to see if it's actually in our environment here, I don't think it is, outputs is somewhere in our workspace but we just don't... oh, it's right here, okay, so it output the actual model right there, and you can see the associated files from the run. okay, we'll run the next cell to register the model in the workspace so you can work with it with other collaborators, sure, so if I run that and we go back over to our models, it is now registered over here. okay, and so we're done with part one. I don't want to do all the other parts, training is enough as it is, but let's just take a look at the deploy stage. for the prerequisites, we're setting up a workspace and loading our registered model, we have to import packages, and then we'd create a scoring script, deploy the model to ACI, and test the model. if you want to do this you can go through all the steps. it does talk about a confusion matrix, and that is something that can show up on the exam, but we do cover that in the lecture content so you generally understand what that is. but, you know, I'm too tired, I don't want to run through all of this, and there's not a whole lot of value other than reading through it yourself, so I think we're all done here. [Music] okay, one service we forgot to check out was data labeling, so let's go over there and give that a go. I'm going to go ahead and create ourselves a new project, I'll say my labeling project, and we can say whether we want to classify images or text; we have multiclass, multilabel, bounding box, and segmentation, so let's go with multiclass. I'll go back here for a second, multiclass, whoops. I don't know if we create a dataset here, but we can probably upload some local files, so let's say my Star Trek dataset. it doesn't let us choose the image file type here, it'd be nice if these were images, and it's telling us here, it's very finicky, this input: a file dataset references a single file or multiple files in your public or private datastore. okay, so we'll go next, and if we can upload files directly that'd be nice, ooh, upload a folder, I like that.
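for reference, the wrap-up cells look roughly like this; it's a sketch, the model name and path come from the sample's train.py above, and since the deploy notebook's confusion matrix is exam-relevant I've added a hypothetical scikit-learn line for it (y_test and y_hat are assumed to exist in that notebook, not here):

```python
from sklearn.metrics import confusion_matrix

# pull the logged metrics (regularization rate, accuracy) from the run history
print(run.get_metrics())
print(run.get_file_names())  # lists run artifacts, e.g. outputs/sklearn_mnist_model.pkl

# register the trained model in the workspace so collaborators (and deployments) can use it
model = run.register_model(
    model_name="sklearn_mnist",
    model_path="outputs/sklearn_mnist_model.pkl",
)

# the deploy notebook evaluates predictions with a confusion matrix, e.g.:
# conf_mx = confusion_matrix(y_test, y_hat)  # rows = true digit, columns = predicted digit
```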
so what we'll do is use some images we have in the free AI resources here, under the cognitive services assets. we'll go back here, and I think objects would be the easiest, oh, but we just want a folder, right, so yeah, we'll just take the objects folder, we'll upload the 17 files, and we'll let it stick with that path, that seems fine to me. we'll go ahead and create it, and so now we have a dataset there, so we'll select that dataset and say next. your dataset is periodically checked for new data points, and any new data points will be added as tasks; it doesn't matter, we're only doing this as a test. enter the list of labels: so we have TNG, DS9, Voyager, and TOS, the different Star Trek series, and for the instructions we'll say label which Star Trek series the image is from, then say next. I don't want ML-assisted labeling enabled, you can have the assisted labeler turned on, but I'm going to say no, and we'll create the project, okay, and I'll just wait for that to create and I'll see you back here in a moment. all right, so I'm back, and I actually didn't have to wait long, I think it runs instantly, I just assumed I was waiting for a state that says completed, but that's not something we have to wait for. so we have 0 out of 17 progress, and we're going to go in here and label some data. we can view the instructions, they're not showing up here but that's fine, and if we go to tasks we can start labeling. so which series is this from? this is Voyager, we'll hit submit, this is Voyager, we'll hit submit, this is TOS, we'll hit submit, this is TNG, this is TNG, this is DS9, DS9, Voyager, Voyager, TNG, DS9, you get the idea, and you have some options here, like changing the contrast if you can't see the photo, or rotating it. this is Voyager, Voyager, TNG, DS9, Voyager, Voyager, and we're done. so we'll go back to our labeling job, we'll see we have the breakdown there, and now our dataset is labeled. we can export the labeled dataset as CSV, COCO, or an Azure ML dataset, and I believe that last one means it goes back into the datasets over here, which will make our lives a little bit easier. we go back to data labeling, and if you had granted people access to the studio they'd be able to just go in here and jump into that labeling job. if we go over to the datasets, I believe we should have a labeled version of it now, my labeling project, and I believe that is the labeled version here, right, yeah, so it's labeled, so there you go, we're all done with Azure Machine Learning, and all that's left is to do some cleanup. [Music] okay, so we're all done with Azure Machine Learning, and if we want we can go to our compute and just kill the services we have running here. now, if we go to the resource group and delete everything, it'll take all of these things down anyway, but I'm going to be a bit paranoid and manually do this, okay, hit delete. okay, and then we'll go back to portal.azure.com, and I'm going to go to my resource groups; everything should be contained within my studio resource group, just be sure to check the other ones as well, and we can see all the stuff that we spun up. we'll go ahead and hit delete resource group. I wasn't sure if it includes things like the container registry, because I know it puts stuff there, but I guess it does, it says container registry, so that's pretty much everything, and that'll take down everything. and if you're paranoid, you can go to all resources and double check over here, because if there's anything running it will show up here, okay, but that's pretty much it, so just delete and we're all done
Info
Channel: freeCodeCamp.org
Views: 92,519
Id: hHjmr_YOqnU
Length: 263min 51sec (15831 seconds)
Published: Wed Feb 21 2024