Finite Math: Introduction to Markov Chains

Captions
Hello, and welcome to the next video in my series on finite mathematics. Now, I do want to say: if you're watching this video because you're having problems in your finite math class, I want you to stay positive and keep your chin up. If you're watching this video, it means you've come pretty far in your education up to this point, so I know that with a little bit of hard work, patience, and practice you can get through it. I have faith in you, and so should you.

Also, these videos are geared towards individuals who are relatively new to finite math, so I will just be going over the very basics in great detail. I will be working through the examples very slowly and very deliberately, explaining every single step we do, so you have a very firm, comprehensive grasp of what we're talking about.

So let's go ahead and dive into this video's topic. In this video we're going to be talking about Markov chains. This is a topic that comes up in finite math courses, and it's probably one of the ones that gives students the most headache, because it takes probability from the first part of the course and matrix operations from the middle part and then smashes them together into Markov chains, or Markov processes. So we're going to talk about what they are, and then we're going to talk about how to set up transition matrices and transition diagrams based on the wording of the problem.

So let's go ahead and get started with an example. The first example we're going to look at has to do with auto insurance risk, and it's a very simple problem that will get us started on understanding Markov chains. An automobile insurance company places its policyholders into one of two categories when the policy renews: they either go into the low-risk category or the high-risk category. Now, a motorist is high-risk if the policyholder has received a moving violation, or a ticket, within the past 12 months, and low-risk if they've had zero tickets in the past 12 months.
So when the policy renews, the insurance company will look back over the past 12 months. If that person has a ticket, they will get put in the high-risk group; if that person does not have a ticket, they will get put in the low-risk group. It's pretty much that straightforward.

Now, at any given moment a motorist is in one of the two categories, or states; we call these states. They're either in the low-risk state or the high-risk state. Of course, this only happens when the policy renews. So I could be in the high-risk category right now, and my auto policy renews, but in the past 12 months I have not been ticketed or anything like that, so when my policy renews I will be placed in the low-risk group. So this is sort of a process: depending on what has happened in the past 12 months, you can go from low-risk to high-risk, back to low-risk, and so on, simply based on your driving record. Okay, so just one of two categories, or what we call states.

Now, based on company data, a motorist that is currently high-risk has a 60 percent chance of being denoted high-risk again when their policy renews and a 40 percent chance of being moved to low-risk. So let's say a motorist right now is high-risk: when their policy renews, there's a 60% chance they remain high-risk and a 40% chance they go into the low-risk group. If you notice, those percentages add up to 100; they have to go into one of the two groups. A low-risk driver, on the other hand, has a 15% chance of moving to the high-risk category and an 85 percent chance of remaining low-risk. So for a low-risk driver right now, when their policy renews, they have a 15% chance of moving to high-risk and an 85% chance of remaining in the low-risk state. Notice again those add up to 100%.

So here is our task for this problem: we're going to set up a probability tree, a transition diagram, and a transition matrix for this process. And again, this is a very simple example that will get you started towards a better understanding of Markov chains.
So let's talk about what a state is. A state is simply the category, for this problem at least, that a motorist can be in at any given time. Another way of thinking of a state is as a category, or some characteristic; it just depends on the problem, and in this problem it's sort of a category. Now, each driver is either in the high-risk state or the low-risk state; every driver is in a state, and no driver is in both states. So every driver has to be either high-risk or low-risk based on their previous 12 months' driving behavior, and of course a driver cannot be in both states: they're mutually exclusive.

Now, drivers can move between states, or return to the same state, when their policy is renewed. So if they're high-risk now, they could be high-risk when it renews or move to low-risk; if they're low-risk now, they can move to high-risk or remain in low-risk. So they're going to move to some state when it renews, whether it's the same one or a different one.

An important note here: this information only tells us the probability of moving from state to state, in this case from category to category, when the policy renews. So it's about the transitioning; it really doesn't tell us anything right now about the probability of starting in either state. We don't know, at least right now, what the probability of being high-risk or low-risk is at this second; we just know that when the policy comes up for renewal, we have probabilities of where the driver will go from here. Now, there are problems we're going to do in a later video where we do know the probability of being high-risk or low-risk right now, but here we're just focusing on the transition diagrams and the transition matrices. So we're looking at just the movement from state to state to state over several steps of the process, in this case several renewals over time.

Okay, so what are Markov chains, officially? Basically, they are a combination of probabilities and matrix operations.
Again, you learn a lot of probability, usually in the first part of finite math, and then somewhere towards the middle or two-thirds mark you learn about matrix operations. Markov chains take those two things and combine them into sort of a model that allows you to do more complex types of problems.

So they model a process that proceeds in steps: steps in time, or some sort of sequence, or trials. That's basically what we're doing here; it's a series of probability trees. But with a Markov chain we can handle a process, a series of probability trees, that is "way numerous" (that's a technical term, "way numerous"): we could do 5, 10, 30 different steps into the future. Now, if you want to draw a probability tree with 13 levels of branches on it, or 30, go right ahead; this keeps us from having to do that. You'll see what I mean by that as we go.

The model can be in one state at each step. When the next step occurs, the process can be in the same state or move to another state. Movements between states are defined by probabilities, and remember, our problem gave us those. We can then find the probability of being in any given state many steps into the process, and this is the really powerful part of Markov chains. For example, I've been with my current auto insurance company for ten years now. Based on this model, if my insurance company did this (which I don't think they do), we could find out my probability of being either high-risk or low-risk ten years from now. Pretty cool, huh? So we can project far into the future using Markov chains and the matrix operations that go with them. There are some really good, solid, practical problems where Markov chains are used, and we'll talk about those as we do more problems. So they're very, very handy for finding out things far into the future that would otherwise be impractical: we couldn't do a ten-year probability tree. Well, we could, but it might take an entire wall to write it all out.
Okay, so back to our problem. Now we're talking about where we're going from and where we're going to. Let's say we start in the high-risk state. Where can we go, as far as states go? Well, we could go to the high-risk state when our policy renews, or we could go to the low-risk state when our policy renews; again, that just depends on our driving habits over the past twelve months. Now remember, the probability of being in high-risk, having your policy renew, and going back into high-risk is 0.6, or 60%; the probability of being in high-risk, having your policy renew, and then going into low-risk is 0.4. So this is a very basic, very simple probability tree; if you're doing this at the end of a finite math course, then you've seen these types of probability trees before. So this is going from the high-risk state to either high-risk or low-risk, with the probabilities that go with each.

Or we can start in low-risk. Then of course we can go from low-risk to high-risk, or low-risk to low-risk, and those have probabilities too. Remember, the probability of going from low-risk to high-risk when your policy renews is 0.15, and the probability of remaining low-risk is 0.85.

So let's put these together. We start in high-risk; we can go either to high-risk or low-risk, with probabilities of 0.6 and 0.4. Notice those probabilities add up to 1. Or we can be in low-risk right now; that means we can move to high-risk or we can move to low-risk, and those probabilities are 0.15 and 0.85. So look at this picture right here: it's one of the most important pictures you'll see with respect to understanding Markov chains. All we're doing is moving from state to state over a series of steps. We can think of the left-hand side as where we are now, and the right-hand side as where we could be when our policy renews, every six months or a year or whatever it might be. And since there are two states, we can go from state to state with a certain probability.
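Those two probability trees can be sketched in a few lines of code. This is just an illustrative sketch (the dictionary layout and names are my own, not from the video): each current state maps to the branch probabilities at renewal, and each set of branches must sum to 1.

```python
# Branch probabilities from the two probability trees in the insurance example.
# Outer key: current state; inner keys: possible states at the next renewal.
tree = {
    "high": {"high": 0.60, "low": 0.40},
    "low":  {"high": 0.15, "low": 0.85},
}

# The branches out of each state must add up to 1: the driver lands somewhere.
for state, branches in tree.items():
    assert abs(sum(branches.values()) - 1.0) < 1e-9
```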
It's just two probability trees; very straightforward, very simple stuff. Now, of course, we're going to put it in a more complex form as we go, but at its heart this is basically what it is. Okay, so if you need to, pause this video right here and take a look at this diagram, because this really is the heart of what we're getting at with Markov chains.

Okay, let's go ahead and continue. We just did a couple of probability trees, but in Markov chains we draw transition diagrams. It's a different way of representing the exact same information. So on one node we have our high-risk and on the other one we have our low-risk; those are the two states we could be in. Now, we're going from one state to another state. Just a reminder: going from high-risk to high-risk is 0.6, high-risk to low-risk is 0.4, and then low-risk to high-risk is 0.15, low-risk to low-risk is 0.85. Now we draw some arrows to show how we could proceed through this process. If we start in state one, our policy renews, and we're high-risk again, we go right back into that state; so we go from high-risk to high-risk, and of course that probability is 0.6. Now, if we're in high-risk, we could instead go to low-risk if we've been a good driver, and the probability of that is 0.4. Notice a few things here: every arrow coming out of a state has to add up to 1, so 0.6 and 0.4 add up to 1, and of course you can go from state 1 to state 1 or from state 1 to state 2. Now for state 2: if we're starting here, we can go back into low-risk, with a probability of 0.85, or we can be moved into the high-risk category, with a probability of 0.15. And congratulations, you just did your first transition diagram. This is it.

So think about the power of how we could use these. Let's say you were with your insurance company for five years, or five renewal periods, whatever it might be, and you come in and they say: okay, you're a good driver, you're going to start in low-risk.
So that's where you start. Then the next renewal comes up and you've been a good driver, so you stay in low-risk. Then maybe something happens the next year and you get a ticket, so you're moved into high-risk. Now we've gone two years into the future: you started in low, then you went to low, and then you went to high. Let's say you had a clean driving record again, so you went back to low; then maybe you got another ticket and you went back to high. That would cover your five renewals. So you can sort of picture how we would go through this diagram. If I grab the marker here, we can actually show that. If we started here in low-risk, and let's say in the next year you're also low-risk, we come back here like that. But then you get a ticket, so the next time you go over to high-risk, right here. Then the next year you're good, so you go back to low-risk here. Let's say you're low-risk again, so at the next renewal you come back to low-risk, and then unfortunately you have a ticket again and you go back into high-risk. So that is five renewal periods, five years into the future. We went from: started in low, then low to low, low to high, high to low, low to low, and low to high. And to figure out the probability of doing exactly that, all we'd have to do is multiply together all the probabilities in that process; that's it. And again, we're going to do that in future problems. So the power of Markov chains allows us to take these things far into the future, many, many steps; in this case it's years.

Okay, I'm going to clear that. There we go; let's go ahead and continue. Now, we can also write this as a matrix. I want to point out here that the two probability trees we did at the beginning, the transition diagram we just did, and the transition matrix we're about to do are all different ways of representing the exact same information. Just a reminder, here are the probabilities for high to high, high to low, low to high, and low to low.
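Multiplying together the probabilities along that five-renewal walk takes one line of code per step. A quick sketch (variable names are mine), using the path traced on the diagram: low, low, high, low, low, high.

```python
# Transition probabilities read off the transition diagram.
P = {
    ("high", "high"): 0.60, ("high", "low"): 0.40,
    ("low", "high"): 0.15, ("low", "low"): 0.85,
}

# Five renewals: start low, then low, high, low, low, high.
path = ["low", "low", "high", "low", "low", "high"]

prob = 1.0
for now, nxt in zip(path, path[1:]):
    prob *= P[(now, nxt)]   # multiply the probability of each step

print(prob)   # 0.85 * 0.15 * 0.4 * 0.85 * 0.15, about 0.0065
```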
Now we set this up as a matrix. On the left-hand side we have what we're starting from, and along the top is where we're going to. So the probability of going from high to high would be in the top-left corner of our matrix, the probability of going from high to low would be in the top-right, and so on and so forth. If we transfer these probabilities to a transition matrix, it looks like this. You can see the probability of going from high-risk to high-risk is 0.6 and the probability of going from high-risk to low-risk is 0.4; notice that row adds up to 1. Then the probability of going from low-risk to high-risk is 0.15 and the probability of going from low-risk to low-risk is 0.85. Exact same information, just represented in a different way, because we have to have it in this form to do more complex problems involving Markov chains. So again, notice that each row sums to 1.

All right, let's do one more example, and then we'll be done with this introduction to Markov chains. This one has to do with basketball free throws, or foul shots. In certain foul situations, a basketball player is entitled to two free-throw shots, two foul shots. Now, a coach analyzes the team's data and learns that if the player makes the first shot, he or she is twice as likely to make the second shot as to miss it. Let's think about that: if the player shoots the ball and makes the first foul shot, they are twice as likely to make the second one as they are to miss it. Notice there are no numerical probabilities here; there are relative probabilities. We're going to figure out the numbers in a minute. Now, if the player misses the first shot, he or she is three times as likely to miss the second shot as to make it. So if the player shoots the ball and bricks it off the rim, the player is three times as likely to miss the second shot as to make it. In real-world terms, it's this idea of confidence: if you make the first shot, you feel like you have a better chance of making the second one.
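The transition matrix is where the matrix operations come in. As a preview of what later videos do with it, here is a sketch using NumPy (assuming NumPy is available; the numbers computed at the end are my own calculation, not figures from the video): rows are the state we start from, columns the state we move to, each row sums to 1, and raising the matrix to the n-th power gives the probabilities n renewals into the future.

```python
import numpy as np

# Insurance transition matrix. Rows: from (high, low); columns: to (high, low).
P = np.array([
    [0.60, 0.40],
    [0.15, 0.85],
])

# Each row of a transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# Row i of P^n holds the probabilities of being in each state n renewals
# from now, given that we start in state i.
P10 = np.linalg.matrix_power(P, 10)
print(P10)   # both rows are already close to roughly [0.27, 0.73]
```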
On the flip side, if you miss the first shot, it kind of gets in your head a bit, and you're three times as likely to miss the next shot as you are to make it. But again, there are no actual probabilities in the text; we're going to figure out what those are. So once more we're going to draw a probability tree, a transition diagram, and a transition matrix for this problem: where we're going from and where we're going to.

Let's say we make the first shot; we'll call state one a make. What can happen the second time, where can we go? Well, the player can make the second shot or miss the second shot; so we're changing states. Now, notice all it says is that we are twice as likely to make as to miss, so we write that as 2x and x: two times as likely to make as to miss. And remember, what do those have to add up to? They have to add up to one. So we can set up a simple little algebra problem: 2x plus x has to equal one, so 3x equals one, so x equals 1/3. A simple algebra problem, but now we have to substitute that back into the problem. If x is 1/3, then the probabilities are two-thirds and one-third: two times as likely to make as to miss. So those are the two states we can go to, with the probability associated with each.

We can also miss the first shot. If we miss the first shot, we could make the second one or miss the second one. Now, it said three times as likely to miss the second as to make it if you missed the first; I guess it kind of gets in your head and you lose your confidence. So three times as likely to miss as to make: we write 3x and x. And remember, those have to add up to 1, so using algebra again, x plus 3x equals 1, so 4x equals 1, so x equals 1/4. We substitute x back in, and our probabilities are 1/4 and 3/4. So if we missed the first shot, we have a 25% chance of making the next one, and if we missed the first shot, we have a 75% chance of also missing the second one.
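That normalize-the-weights algebra (2x + x = 1 gives x = 1/3; x + 3x = 1 gives x = 1/4) works for any set of relative likelihoods. A small sketch with a helper function of my own (not from the video), using exact fractions; stacking the two resulting rows also gives the transition matrix for the free-throw problem.

```python
from fractions import Fraction

def normalize(weights):
    """Turn relative weights like 2:1 into probabilities that sum to 1."""
    total = sum(weights)
    return [Fraction(w, total) for w in weights]

# Made the first shot: twice as likely to make the second as to miss (2x and x).
make_row = normalize([2, 1])   # [2/3, 1/3]

# Missed the first shot: three times as likely to miss as to make (x and 3x).
miss_row = normalize([1, 3])   # [1/4, 3/4]

# The two rows stacked together form the transition matrix.
P = [make_row, miss_row]
for row in P:
    assert sum(row) == 1   # rows sum to exactly 1 with fractions
```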
Let's go ahead and do a transition diagram. Our state one is make, state two is miss; that's just a reminder of our probabilities, as stated in the problem. If we make the first shot and then make the second one, remember that was 2x; make the first one, miss the second one, that's x, because we're two times as likely to make the second one as to miss it if we made the first shot. Now, if we miss our first shot, we're three times as likely to miss again on the second one as to make it. So all I did was write the relative probabilities into our transition diagram; that's all. And luckily for us, we already went ahead and figured out the probabilities, so if we substitute those back in, our transition diagram looks like this. State one to state one, or make to make, is two-thirds; make to miss is one-third. If we missed, then miss to miss is 3/4 and miss to make is 1/4. So this should be making a fair amount of sense right now; it's actually pretty straightforward. You just have to do a couple of examples to get the flow and see how they're set up.

Now for our transition matrix. There are the probabilities, just as a reminder. If we put in our relative probabilities from the problem: remember, make to make is two times as likely as to miss, so we have 2x and x in the top row; if we miss the first shot, then we're three times as likely to miss the second one as well, so that's 3x and x. Same thing we had in the trees, same thing we had in the diagram. If we put that into our matrix, it looks like this: we have 2/3 and 1/3 on top (of course that adds up to 1), and on the bottom we have 1/4 and 3/4 (that also adds up to 1). Again, same information, just represented in a different way.

Okay, let's review, and then we're done. Remember that Markov chains are just a combination of probabilities and matrix operations. You saw that we used our probabilities to build our trees, we used our probabilities to build our transition diagrams, and then to build the transition matrix we basically just moved the probabilities into matrix form.
They model a process that proceeds in steps: time, or a sequence, or trials. You can think of our basketball shots as two trials: the player takes the first shot and makes or misses, and then takes the second shot and makes or misses. For our auto insurance, we can think of it as time: you're placed in a category when you first sign up for your insurance; when it renews, you are put into the same or a different risk category; the next time you renew, the same or a different one again, and so on. So it's a process over time.

The model can be in one state at each step. In these problems we had two states: in the auto insurance we had high-risk and low-risk, and in the basketball shots we had make and miss. But the states could be anything. They could be whether a customer uses ketchup A or ketchup B; that's a common Markov chain problem used in marketing-type situations. We could model whether or not a student graduates from school, so the two states would be graduate and not graduate. So you can see a state can be any sort of category or characteristic, as long as the states are mutually exclusive and a person (or whatever we're modeling) can only be in one or the other.

When the next step occurs, the process can be in the same state or move to another state. In our insurance example, if we're high-risk now, we could be high-risk next time or low-risk, depending on our driving behavior the year before. In the basketball situation, if we miss the first shot, we could miss the second one or make it, in which case we would move to a different state. Now, movements between states are defined by probabilities; I think we've gone over that pretty well in these two examples. Just remember, the probabilities out of a state into the next step have to equal one.
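One way to get a feel for "the process moves from state to state by these probabilities" is to simulate it. A rough sketch (entirely my own, not from the video) that random-walks one driver through several policy renewals using the insurance probabilities:

```python
import random

# Transition probabilities: current state -> list of (next state, probability).
P = {
    "high": [("high", 0.60), ("low", 0.40)],
    "low":  [("high", 0.15), ("low", 0.85)],
}

def simulate(start, renewals, rng=random.Random(0)):
    """Walk one driver through a number of policy renewals."""
    state, history = start, [start]
    for _ in range(renewals):
        states, probs = zip(*P[state])
        state = rng.choices(states, weights=probs)[0]  # pick the next state
        history.append(state)
    return history

print(simulate("low", 5))   # one possible 5-renewal history
```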
Now, the power of Markov chains is being able to take these sequences, or time steps, or trials, many steps into the future. For the car insurance example, we could figure out the probability of being high- or low-risk 15 years into the future. For the basketball example, we could figure out, over the course of a season, how many makes or misses we expect in situations where our players take two foul shots. We could do that with probability trees, but they would be the size of a small wall; Markov chains allow us to do it directly, and in future videos I'll show you how that works.

Okay, we are done with our introduction to Markov chains. I know this was a lot, but I wanted to give you several very concrete examples that you could repeat in your own work or follow along with step by step. So just a reminder: if you're watching this video because you're having problems in the course, I want you to stay positive and keep your chin up. You've come this far; I know that with some hard work and patience you can do well. Again, I have faith in you, and so should you. All that being said, thank you very much for watching, and I look forward to seeing you again next time.
Info
Channel: Brandon Foltz
Views: 168,127
Rating: 4.9172564 out of 5
Keywords: finite math markov chain, finite markov chains, markov chain, markov chain introduction, introduction to markov chains, markov chains, finite math, markov, finite mathematics, statistics 101, math, anova, steady state matrix, brandon foltz, probability, logistic regression, machine learning, brandon c foltz, chain, initial state vector, transition diagram, transition matrix, linear regression, steady state vector, statistics, markov process
Id: tYaW-1kzTZI
Length: 29min 29sec (1769 seconds)
Published: Mon Nov 05 2012