AI-900 - Azure AI Fundamentals Study Guide

Video Statistics and Information

Captions
Hey everyone. In this video I wanted to provide some tips and tricks for the AI-900, the Azure artificial intelligence and machine learning exam that gives you the AI Fundamentals certification. As always, a lot of preparation goes into creating these videos, so if it's useful, a like, subscribe, comment and share is always appreciated. Now, as with all the Microsoft certifications, I would start by looking at the Microsoft certification page itself. If we actually go and take a look at the page, it goes through the detail of exactly what's in the exam. So we have this "skills measured" section, and you can click "download exam skills outline", which goes into all of the different objectives, the skills, and then the individual items you need to know about. So make sure you've looked at this and tracked what's changing; you can see in red anything that's changing around those various things, just to make sure you're as ready as possible. And what I really recommend you do is the free online set of training that uses Microsoft Learn. So definitely, definitely, definitely go through that at minimum. Maybe you find other resources as well; the more resources the better, it's going to better equip you for success. But at minimum, go through the exam skills outline and the online free training, and it's going to put you in a good position. The AI-900 is very high level; it's not super deep. The exam is not going to quiz you on the detailed steps to perform any particular thing. It's really about understanding what the services are and what types of AI and machine learning we might want to do, and that's what it's going to be asking. It's all multiple choice; there are no case studies and no hands-on labs. So what I want to do is go through some basic things to think about as a last-minute kind of prep. Remember, this exam is about the Azure solutions around AI.
And we can think about everything we're doing here as computer solutions that mimic human behavior. So if you think, well, we have this kind of brain thing up here, and then we have different senses: we have vision, we have speech capabilities, we can hear things, and all of those senses feed into the brain. It's a very terrifying human being that looks a bit like me: no hair, the brain power has burnt all that hair away. But the idea behind all of these things is that if we have these models around machine learning, well, we're going to have certain information to think about. So information, i.e. datasets for example, is going to come in, and then we want to do some training, so we use different models to train. With the info and the training we're going to have learning, and once we've trained those models well, we'll be able to use them to do different things. That might be, for example, to make a decision; it might be a prediction; it might be some other kind of result. But essentially in all of these things we have models, datasets and training. Some of them are prebuilt, and some of them we're going to train ourselves to actually enable these various things. But the goal is to create this ability to predict based on data, based on facts; that's everything we're doing. Now, there are various different types of activities. A very common one is anomaly detection: hey, we look at past results, and then is that good or bad? Does this indicate a certain thing is going to happen, like predictive maintenance? Hey, the temperature is rising on these sensors, there's going to be a failure in X amount of time. But it's all about getting those various signals to enable us to do something: those decisions, predictions and, I guess I'll write this in as well, results. It could be as simple as a chat bot.
Hey, we get these sorts of questions in, and we give some output back out. Now, when we think about Azure and these kinds of services, it's Azure Machine Learning. So we have this Azure Machine Learning, and the way we're actually going to use it is as this complete platform that we're going to leverage. There are various aspects to that platform. The first thing we're actually going to do is create a workspace. The Azure Machine Learning workspace is where we're going to go and create things and then start doing the various activities we're going to leverage. Now, the way to actually use the workspace is Azure Machine Learning studio, which hooks into that workspace. You may have seen this already; if you go through those lab exercises, they actually walk you through using it. It's at ml.azure.com. Now, when we create that workspace, if we want to do our own training, for example, well, we need some compute, so one of the things we can add to the workspace is provisioned compute. And then what else do we actually need? Remember, we've got to do that learning, that training, but we need information. So in addition to provisioning some compute in our workspace, we're going to create some datasets. This is what we'll use to actually train the environment; these things will actually be used for our training. Now, there are different types of models we can use for that training. For example, think about that dataset I have coming in. That dataset is, for example, structured: the data is tagged and it has some kind of label value. So if I have that kind of supervised set of data (let's do a different color), so if we have these kinds of supervised datasets, I have really three key types of models. We have regression.
Now, the point about regression is essentially that what I'm going to get out of it is some numeric value, and that's based on past history. So if I had fed in, for example, comic books, a whole set of data about comic books and their attributes (the characters, the age, the condition) and then the values, now I could pass in the attributes of a comic book I have and it would spit out the value of it. That would be regression. Then we can think about classification. And as the name suggests, with classification I'm going to get a class, a category, coming out, based on the features. So for example, a classification here might be: hey, I'm applying for a loan, and it's going to give me a high or low risk based on the various factors that I feed into it. And there's also time series. Now, time series is kind of the same as regression; it's regression plus a time-series set of data. So what this is really going to let me do is get a numeric value for a future point in time. How much is my car going to be worth in two years based on these conditions, these various types of things? Now, this is all where my data is supervised: that dataset actually had the data labeled. If I don't have that, there is also unsupervised. With unsupervised, I'm really just going to feed in the data, because I don't know what the outcome is going to be. And what it will do is look at various similarities across the data, and it will actually create for me those categories, that organization it can find. So the whole point here is: in Azure, hey, I've got this workspace, I have my compute that I use to do the training, and my datasets I apply. These are all the various models, supervised and unsupervised; feed that into the training, and then of course once I'm done, well, I'm going to deploy it. So now I think about actually deploying my trained model.
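To make the regression-versus-classification distinction concrete, here's a toy sketch (the numbers and the loan threshold are made up, purely illustrative, nothing to do with any Azure service): regression gives you a numeric value out, classification gives you a category.

```python
# Toy sketch (hypothetical numbers): regression returns a numeric value,
# classification returns a category.

def fit_linear_regression(xs, ys):
    """Closed-form simple linear regression: y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    b = mean_y - a * mean_x
    return a, b

# Regression: predict a numeric value (e.g. comic age in years -> value in $).
ages = [10, 20, 30, 40]
values = [15, 25, 35, 45]
a, b = fit_linear_regression(ages, values)
predicted_value = a * 25 + b  # value of a 25-year-old comic

# Classification: predict a category (e.g. loan risk from a 0-to-1 risk score).
def classify_risk(score, threshold=0.5):
    return "high" if score >= threshold else "low"
```

Same kind of labeled training data going in; the difference is whether a number or a class comes out.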
With that trained model deployed, I can actually do the prediction and use it, and I'm typically going to deploy that into production to AKS, Azure Kubernetes Service; that's really for production. If I'm just testing, I could use Azure Container Instances. And from that point, once it's actually deployed, I can have clients connect to it. So I could have those various client apps that actually want to use that trained model I've created go and connect in. And remember, to connect in, there are a number of different things they use: there's an endpoint, and there's going to be a key. So I have this kind of prediction key that gives me access to that deployed model. Those are the pieces of data they're going to use to actually go and connect in and use it. So that's the picture when I think about how, within Azure, I can start to use and train these things and make them actually available. So that's great from a "how do I use it" perspective. But there are huge risks when we think about anything artificial intelligence: how does it make those decisions? There's the familiar saying: garbage in, garbage out. If my dataset is not good, it can lead to a lot of problems, for example bias. Imagine you had a dataset where every English person who was bald was a villain. If that's the only dataset I've fed in, when I applied for anything it would say: well, John is a villain, he's bald and he's English. Likewise, if I have errors in my data (and again, I'm training it with this data), errors in the data will cause harm. I'm using data for the training; well, that data might have sensitive information, so if that data is exposed, that's a problem. I might create solutions that don't work for everyone, so it's not going to be available and applicable to everyone that wants to use these things. We're trusting these models are correct. Who is responsible for the various decisions that these computers are making?
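Before moving on: to make that endpoint-plus-key idea concrete, here's a hedged sketch of what a client app sends to a deployed scoring endpoint. The URI, key and input features below are placeholders I've made up; the real values come from your own deployed service in Azure Machine Learning studio.

```python
import json
import urllib.request

# Placeholders -- substitute the endpoint and prediction key from your
# own deployed service.
scoring_uri = "https://example-scoring-endpoint.example.net/score"
prediction_key = "<your-prediction-key>"

# The client bundles its input features as JSON and sends the key
# in the Authorization header.
payload = json.dumps({"data": [[25, "mint", "first-print"]]}).encode("utf-8")
request = urllib.request.Request(
    scoring_uri,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {prediction_key}",
    },
)
# With real values you would then call urllib.request.urlopen(request)
# and read the prediction back out of the response body.
```

The point for the exam isn't the code; it's that the client needs both pieces of data, the endpoint and the key, to use the model.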
So when we think about machine learning (and this isn't just Azure; these are universal principles), when I think about this whole pattern, there are essentially six key responsible AI principles that we really want to apply to everything. And Microsoft has a whole site on this; if we jump over super quick, it's worth having a look. It's microsoft.com/ai/responsible-ai. So if we look here, this is the website, and it talks through the six key principles. So if we think about these for a second (and then you can go and look at that website), the six key principles are really based around fairness. With fairness we're talking about: all people should be treated fairly. There is no bias, which comes from the data that's used to train it. Then I can think about reliability and safety. We're putting decisions and predictions in the hands of these various models we create, so we need to make sure they're very rigorously tested, and make sure we've got very good deployment processes. Imagine this was, for example, a self-driving car, and we didn't do very good testing; well, there's a real issue there that it could cause loss of life. So you have to think about how rigorously we are testing the reliability and safety of these systems, because the more we put in their hands, the more important it is that they're fair, reliable and safe for us. Then we think about privacy, so we have privacy and security. Once again, we're training it with data; we have to make sure the data used is kept private. It doesn't leak; we don't share it with other people to use for their things. It should be inclusive: the system should include and work for everyone, from every part of society, regardless of gender or race or physical abilities or anything. So this should be applicable to everyone. It should be transparent: if it's making decisions, we need to understand how it works.
We need to understand when it works, and when there are maybe limitations to it. Those should be very clearly set out in statements. For example, if we've only tested it, and only know it works, in very well-lit conditions, we could say: hey, this system has been tested and works in these conditions; outside of this, you should consult other things. We need to be very clear about how it works, when it works, and its limitations. And obviously we still have to have accountability. Ultimately, there should be governance; there should be organizational principles that guide that the solutions created meet ethical and legal standards. There are always gray areas on things like facial recognition. Facial recognition could be fantastic: if used in the right way, it can make life easier, it can increase security. If used in the wrong way, it has privacy implications for us. And so that accountability, that idea of having ethical standards, is super important in really everything that is done. So we just need to make sure that we understand those things. In the exam you can expect to see: well, which of the six principles does this apply to? So take a look at that website and make sure you've really looked and understood what they mean; watch the videos, and feel pretty confident about that, OK? So with that, let's go back and actually think: well, what exactly are the types of service, the types of AI, that we typically want to use? I'm going to start with vision, so I can think about: OK, we're drawing kind of vision, and I'm drawing in just the eyes. For all of these things, no matter what we're doing, it's obviously using the brain as well; it has to then interpret and do something with this. So if we think for a second about computer vision, there are many different types of computer vision that we will typically see. For example, I can think about image classification.
So with image classification: well, what is this image? So I can think about: well, OK, there's kind of a, here, that's supposed to be a car, and maybe there's a tree, a forest. So it might say: hey, the classification is car, or person, or group of people. I might also have the idea of a description, so I can have a description of the image. So yes, I have classification, but I also might have analysis of the image. Now, building off of that analysis: well, instead of just telling me, hey, it's a car, for example, or "car in wood, black and white picture", it might take that actual same image, but if I think about analysis, now it's maybe tags, so it's going to return to me: car, tree, et cetera. So it's giving me the tags of what is actually in the image. Now, within these classifications and analyses, there are actually these special domains, and we see things like celebrities (sports people, film stars) and landmarks. So if I use these specialized domains and show a picture, it might be able to say: ooh, Statue of Liberty, Arnold Schwarzenegger. It will not say John Savill. So there's this option to have these specialized domains within this image analysis of what we're looking at. Now, we also have the idea of object detection. The difference here is: remember, the classification and analysis look at the picture and return you a result about it. But if we now think about object detection with that same picture, and I have the idea of, I've drawn a car (it's getting worse and worse) and then maybe a tree, well, now what it will actually do is put a box in there and return coordinates. So it will say: OK, I'm going to put a box around this object, I'm going to say it's a car, and I'm going to give a probability, a confidence level, in that result: 0.9.
And around this one, well, I'm going to say tree, and I'm 0.8 confidence. So with object detection we actually get a set of coordinates, the boundaries for the objects, and then the description, and then the confidence level for each of them. So that actually shows us the location of the items within the image. Then we have the next step (I'll draw a little bit of a gap), which is semantic segmentation. So where object detection is really a box, the object and a confidence, this is all about the pixels. With semantic segmentation, if we go to that same picture again, and again I had the car and the tree, this time what it's going to do is basically color those pictures in: it will say the car is anything blue, and it would color the tree in and say the tree is anything green. So it actually colors the pixels on the picture to match the actual object that we're seeing. So that's a very powerful capability. Then there are sets of services around facial. When I think about facial, there are different elements. This could, for example, just detect the faces: hey, there's a face there; again, a box for where the face is. This could give attributes: age, happiness (I remember once I saw something with a baldness level, and it was very highly confident in the baldness level of that). Or we can actually have recognition, so we know who a certain person is. So there are these different sets of capabilities around this. Now, you have to be aware of the limitations; things like sunglasses it can handle. So I can think about: hey, we've got a picture here with the person (and that is supposed to be a person, or a strawberry, whatever), and if you put sunglasses on it, well, it can handle that; it's seeing the points of the nose, the mouth. But maybe an extreme angle would break that. So you have to understand what it will do.
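The object detection flow described above maps onto the Computer Vision "analyze" REST call with the Objects feature. A hedged sketch of the request shape follows; the resource name, key and image URL are placeholders, and the response shown is made up to illustrate the box-plus-label-plus-confidence idea, not a real service reply.

```python
import urllib.parse

# Placeholders -- substitute your own Computer Vision resource and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
analyze_url = endpoint + "/vision/v3.2/analyze?" + urllib.parse.urlencode(
    {"visualFeatures": "Objects,Description,Tags"}
)
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/json",
}
body = {"url": "https://example.com/car-and-tree.jpg"}  # hypothetical image

# Illustrative (made-up) response shape: each detected object comes back
# with a bounding rectangle, a label and a confidence level.
sample_response = {
    "objects": [
        {"object": "car", "confidence": 0.9,
         "rectangle": {"x": 10, "y": 40, "w": 120, "h": 60}},
        {"object": "tree", "confidence": 0.8,
         "rectangle": {"x": 200, "y": 5, "w": 80, "h": 150}},
    ]
}
```

That rectangle is exactly the "box and coordinates" the transcript talks about; classification and description come back from the same call via the other visual features.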
But the whole point is it will return the coordinates of where there's a face, and then attributes: happy, sad, bored, age, male, female. And again, you have to be careful with those key principles when we start identifying age or detecting male/female; there can be concerns around those things. Also, from a vision perspective, there's obviously OCR: detecting maybe a few words in a picture, so I can think about reading just text in an image. Or maybe it's importing very large amounts of text, and there are two different APIs when I think about getting a few words compared to actually reading huge amounts, with the Read API for larger amounts. Then there are things like Form Recognizer: I have a big form I want to bring in. So I've got a picture of a form, I want to bring it in; maybe I've got some kind of questionnaires or surveys I want to bulk-import and actually use. So I can think about, right, for the OCR I mentioned: well, there's an OCR API and there's a Read API. The OCR API is if I'm just doing a couple of small amounts, so that's kind of small text, a small amount; and the Read API is for large, basically a large amount of scanned text. One is a picture with a couple of words in it; the other is actually how I'm scanning in maybe a page from a book, for example. So there are all these different types of computer vision service that I may actually see within those things. Next we think about natural language: speech, and actually hearing, and then through that hearing maybe transcribing it to text, maybe translating it to other languages, maybe a combination of those things. So now I can actually think about, OK, let's actually go and use red. So now I'm thinking natural language, so I'm thinking about: well, I'm kind of speaking and I'm kind of hearing these things. Let's zoom in a bit. And once again, there are different elements to this. I can think about: well, OK, there's speech.
And speech actually entails a number of different things. I can think about, well, speech to text, so it's hearing. Now, when I think about speech to text, there are really different aspects to it, because I have to take the various audio elements. So I think: well, there's audio coming in, those audio waves. The audio waves have to go to phonemes (let me make sure I spell that right; nope, I didn't): phonemes. A phoneme is one of the unique sounds that exist. So we have letters of the alphabet, A-B-C-D-E, and then the phonemes are the actual sounds; for example, the sounds in "cat" could also make "act". And so we take the audio to get the phonemes, and from the phonemes we actually get words. So we get that flow of different things. A speech-to-text system has to be able to do that; that's speech recognition, bringing those things together. I can also think about text to speech: here we have some written thing and we're actually going to say it out loud. Maybe, well, which voice do we use for that? And then I can think about speech to speech, i.e. speech translation. And there's a huge number of languages supported. So hey, I'm hearing it in one language, and we're going to output it in another. So there are all these combinations of different things we can use, and there are different APIs. But anytime you hear anything with speech as part of the requirement, it's going to use those speech capabilities, whether it's translation, whether it's hearing, whether it's speaking. Now, also as part of natural language, I can think: well, there are various other text services. So there's natural language, but then, sure, there's text. If we think about text, well, there's analysis: textual analysis of a document or a passage of text.
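Before going on to text analysis, that audio-to-phonemes-to-words step can be sketched as a toy lookup. The phoneme symbols below are made up for illustration; real speech recognition is statistical, not a dictionary lookup, but the "cat"/"act" point survives: the same sounds in a different order give a different word.

```python
# Toy sketch (made-up phoneme symbols): speech-to-text goes
# audio -> phonemes (the unique sounds) -> words.
phonemes_to_word = {
    ("k", "ae", "t"): "cat",
    ("ae", "k", "t"): "act",  # same three sounds, different order
}

def decode(phoneme_seq):
    """Map a sequence of phonemes to a word, if we know it."""
    return phonemes_to_word.get(tuple(phoneme_seq), "<unknown>")
```

So the ordering of the phonemes, not just which sounds occur, is what the recognizer has to get right.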
With that analysis, I'm doing various things that I want from that document or that passage of text. Sometimes we're looking for key phrases; I can think about a key phrase as the key talking point. I can be looking for entities, so that could be a person, a place. I might be trying to do sentiment. With sentiment, is it positive? You're going to get a value: for example, 0.99 would be super positive, or is it negative, 0.1? Neutral would be maybe around 0.5. It might also detect language. Now, if it can't detect the language, you'll see NaN; that's if the language is ambiguous and cannot be detected. It can't tell, it can't understand those things. So we have that text analysis, and that sentiment analysis can be very, very powerful. It's very common to see, for example: hey, I'm scanning Facebook, I'm scanning Twitter, we get the text in, we send it to that sentiment analysis, it returns back positive or negative, and then we do something with it. So we have analysis. We also have things around translation, and that's over 60 languages. With that translation, again, I could think about translating speech: yes, there's text, but there's also translating speech. And when you think about the speech side, you can actually do various things. There's filtering: I can have a profanity filter, so it doesn't carry that stuff over, or I could have a selective filter to say, well, don't translate these things; maybe it's a brand name, so don't translate those. So we have these capabilities around the text, and then there's language understanding. When I think about language understanding, you'll often hear about LUIS: that's the Language Understanding Intelligent Service. That can be either an authoring or a prediction type of resource, and its goal is detecting intent from an utterance.
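On those sentiment scores, here's a tiny sketch of how the 0-to-1 value maps to a label. The thresholds are arbitrary, chosen here just for illustration; the point is that the service hands back a score near 1.0 for positive, near 0.0 for negative, and around 0.5 for neutral, and the caller decides what to do with it.

```python
# Toy sketch (arbitrary thresholds): map a 0-to-1 sentiment score to a label,
# the kind of post-processing a "scan Twitter, react to negativity" app does.
def label_sentiment(score):
    if score >= 0.6:
        return "positive"
    if score <= 0.4:
        return "negative"
    return "neutral"
```

So a 0.99 comes back as positive, a 0.1 as negative, and a 0.5 as neutral, matching the examples above.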
So, like if you have devices in your house, like a smart home, you say: hey, turn on the lights. It can detect the intent: turn on the light. So we think about the entities, the intents and the utterances; we train those into the model, and now it can actually go and do those various things we want. So that's all around LUIS. Those are some of the key things from computer vision and natural language. Then there's also a set of capabilities around conversational AI, and I'll draw that down here so we can think about that bigger: conversational AI. So these would be interactions (that's not a beard, that's him talking), different types of interactions with the users. And there are really two elements to conversational AI. First, there's QnA Maker. This is all about creating a knowledge base. Now, that knowledge base: you could create it directly inside the tool, I could think about importing an FAQ from somewhere else, it could be a chit-chat data source, it could be a web page; it doesn't really matter. But it's all about these question-and-answer pairs. So I'm creating that knowledge base, and then I actually want to do something with it, so then we have the Azure Bot Service. And what the Azure Bot Service gives us is a framework to develop, manage and publish these bots. What the bots are actually going to do is take this knowledge base (so it takes the knowledge base you've got here) and expose it through various channels. And a single bot could actually expose to multiple channels: I could have a bot that exposes a knowledge base to a web-based chat and to Teams, just by having multiple channels within a single bot element. So think of the bot as the interface to the knowledge base, and then I'm exposing it out there.
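The knowledge-base-plus-bot split can be sketched as a toy: the knowledge base is just question-and-answer pairs, and the bot is the interface that matches a user's question to a pair. The pairs below are invented, and real QnA matching is fuzzier than an exact lookup, but the shape is the same.

```python
# Toy sketch (made-up Q&A pairs): a knowledge base is question/answer pairs;
# the bot is the channel-facing interface that matches questions to answers.
knowledge_base = {
    "what are your opening hours": "We're open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
}

def answer(question):
    """Normalize the question and look it up in the knowledge base."""
    key = question.lower().strip(" ?!.")
    return knowledge_base.get(key, "Sorry, I don't know the answer to that.")
```

The same `knowledge_base` could sit behind a web chat and a Teams channel at once; only the `answer` interface, the bot, is exposed.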
So those are the types of service we think about. Obviously, there are a lot of them. But it's all about: hey, I can see things, be it objects or text or forms; we have different types of hearing speech, translating languages, outputting speech; analyzing text (what is the sentiment, what is it asking me to do, that natural language understanding); and then actually interacting with people. So if these are all capabilities, well, how do we actually use them? Then we actually get to: what are the Azure AI services? Now, when I think about that, there are a lot of them. Some of them are specific, and then there's a general one. So from a many-capability perspective, maybe it's just the Azure Cognitive Services resource. This gives me one endpoint and one key that I can use regardless of which service I want. So this has many of the different services contained in this single cognitive service resource. So if I just want to be able to use many different cognitive services, I want it as simple to administer as possible, and I don't care about how the different billing is happening: if I create a cognitive service resource, I can use pretty much any of them. There are some exceptions to that, but if you see a question that's like, hey, I need a single service to use multiple capabilities, I want to do vision and text, the answer is going to be Cognitive Services. And then you have the idea of specific resources. Specific is going to be: hey, you just need to use this one, or I need to be able to individually track the billing or usage. And even within these individual ones, sometimes they can be used for either authoring (training) or prediction, and if you select both, it will actually create two separate resources. So from a specific perspective, I can think about Computer Vision. We talked about computer vision; these are all prebuilt models. No training, so I cannot train it with my own images, for example.
I can do the analysis of images, I can do OCR, all those things (so this does include OCR), but I can't use my own images to go and train and do those things. And Computer Vision is included in the general Cognitive Services resource. Then you have Custom Vision. With Custom Vision, I can use it for training or prediction. If you select both, it will create two resources; so you can say both, but it will create two, one for each. And with those I'll get a key and endpoint for each individual one, so I can separate that tracking. But now I can actually use my own images, for example, to do that training, so it gives me that capability. Once again, that's part of the general cognitive service. Then we have the Face service. Face, remember, is about identifying people: grouping, analysis, classification. That is part of the general service. Then we have Form Recognizer. And this is important, because this is not part of it. Form Recognizer is that ability to import an image of a form (I think it's 20 megabytes max size), but that is not part of that general cognitive service that can do most other things. We have a Text Analytics service, another service we can leverage there. There's a Speech service, and remember, if the question says speech in any way, I can think: well, this is text to speech, speech to text, speech to speech, i.e. translation; it's going to be that. Remember, you can combine those things together: for example, it could hear German and output English text. So that's going to be the Speech capability there. Then there is Translator Text, and again, that's those 60-plus languages. There are certain codes you configure, and you can translate to multiple languages with a single call: you can specify multiple "to" values for the languages you want to go to. So I could have English coming in and, in a single call, go to German and French, for example. So we have that ability in there.
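That "multiple languages in a single call" point can be shown by how the Translator request is built: the target-language parameter simply appears more than once. A hedged sketch, with the key and the text made up; only the request construction is shown, not the actual call.

```python
import urllib.parse

# Sketch: Translator's v3.0 REST API takes "to" more than once, so one
# call can fan English out to both German and French.
base = "https://api.cognitive.microsofttranslator.com/translate"
params = urllib.parse.urlencode(
    [("api-version", "3.0"), ("from", "en"), ("to", "de"), ("to", "fr")]
)
translate_url = f"{base}?{params}"

# The request body is a list of texts to translate (hypothetical example).
body = [{"Text": "Hello, world"}]
# POSTing this with your subscription key header returns one result per
# "to" language for each input text.
```

So a single request, a single billing event, and translations into every "to" language come back together.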
And again, that's got that profanity filter once again, and it also has that selective filter, so I could say: hey, don't translate this, maybe it's a brand name, for example; I don't want to do that. And the Text Analytics service (I've got to draw the dots): it's in there, it's in there, it's in there. So all of these are in the general cognitive service; the only exception is really that Form Recognizer. We have Language Understanding. Again, this can be authoring or prediction, and if you say both, it's going to create separate resources for them. This is where you define those entities, I define the intents and utterances, and then it goes and builds that. And then those final two services: there's, remember, QnA Maker, and remember, QnA Maker makes the knowledge base, which is those question-and-answer pairs. Again, remember, you can create that in the tool, I could import a document, I could import a chit-chat source; there are different things I can do to create that. And then there's the Azure Bot Service. These are not within Cognitive Services; these are separate services. But again, this is all about essentially developing and managing those bots that take the knowledge base and make it available to some kind of channel. So the key thing I would really say is: remember, for these, anytime you see a question that says, hey, you need multiple, it's probably going to be Cognitive Services, unless it says you need multiple and you want to track the cost individually or isolate the keys. If it says, hey, I just want this one service, or I want separate keys and endpoints, it's going to be the individual specific one. So understand the capabilities of them. Make sure you understand the types of AI and machine learning. Make sure you understand those key AI principles around fairness, reliability, privacy, inclusiveness, transparency and accountability.
And again, if you go through that Microsoft training course, it will have you go and create a workspace, use the studio to provision some compute (make sure you delete it afterwards, or you're going to pay for that). They have some sample datasets you can bring in to do different types of training against the models and then actually see all of that in place. It's not a super complicated exam. Its real goal is just to cover breadth: make sure you understand the different types of service and when you would use which type of service. That's really the goal of the exam. So take your time, do the prep, don't overthink it, and don't panic about failing. If you fail, no big deal; it will tell you where you're weaker, so go and restudy those elements and you'll pass it next time. So good luck. Until next time. Take care.
Info
Channel: John Savill's Technical Training
Views: 142,050
Keywords: azure, azure cloud, ai-900, azure ai, azure ai certification, azure machine learning
Id: E9aarWMLJw0
Length: 46min 9sec (2769 seconds)
Published: Tue Jan 12 2021