Big O Notation & Time Complexity Tutorial (Code Efficiency & Algorithm Analysis)

Video Statistics and Information

Captions
Hey guys, and welcome back. I actually just finished editing all the footage you're about to see, and I want to give you a quick overview and recap of exactly what this video is going to be about, because it's a bit longer than what I typically post. This video and tutorial is about time complexity and Big O notation. I don't focus tremendously on exactly how to write Big O notation, and I don't go super in-depth; my main goal is to give you a fundamental understanding so you can apply that knowledge to your own programming and your own code. A lot of people aren't aware of the effect the scripts they're writing have on their programs and how slowly they can end up running. So in this video, the first 15–20 minutes go through what Big O notation is and how it works as a quick summary, and then we get into a bunch of algorithms that I've written and analyze them to see how fast or slow they are. This should hopefully let you write better, faster code, and if you don't already know this, you're obviously going to learn it. It's a really fundamental skill if you're in any kind of computer science field, or a programmer in general — it's something you definitely should know, and if you don't, I highly encourage you to watch through the entire video and make sure you actually understand everything I'm talking about. So with that being said, let's get into the video. Before I get too far into the content, I'd like to announce that I'm partnering with Microsoft, and they're sponsoring this video for me to tell you about a new project they've started called the Microsoft Dev Collective. This is a really exciting and great opportunity for you guys: essentially, the Microsoft Dev Collective is a community of developers where you can go to learn and collaborate. Right now, just by signing up and becoming part of the Dev Collective, you gain access to over 30 free courses — that's right, completely free, you don't have to pay for anything. There's a link in the description, and if you go there it brings you to a page with a bunch of courses that look just like this — let me show you. Okay guys, so I'm on the Microsoft Dev Collective website here; I'm just going to click on Explore and have a look at all of the different courses they have to offer as of now. Like I mentioned, this is a brand new thing and there's going to be new content added monthly, so make sure you keep checking back — since the first time I looked there have been some new courses added, and I know they're adding a lot of content that's going to be super valuable. I just signed in here so I can access all these courses. I'm going to go to Topics and then Languages — this is where my tutorial will be showing up — and you can have a look at some of the stuff they have, like Programming Foundations, Programming Foundations: Object-Oriented Design, and Make Your C# Code More Functional. Just a ton of awesome resources, and if you have a look I'm sure you'll find something that interests you — and it's free, so what's the harm in having a look and checking it out? Anyway, that's been it for this little ad; I hope you got some value from it, and now, on to the video.
Okay, so what this tutorial is really going to focus on is time complexity and Big O notation. I'm not trying to make you pros at this, but by the end of this video I really want you to be able to look at certain algorithms — maybe something like what's on screen — analyze them, and get some kind of idea of how long they're going to take to run. That's a really important skill, and it's taught a lot in computer science in general. I don't really care whether you can formally write big O of n, or n squared, or n plus 1; I just want you to be able to look at an algorithm practically and say, "okay, based on my knowledge of time complexity and how things work, this is going to take a long time to run." The reason that matters is that when you're writing code — and I see this a lot with beginners — people have no idea how long it actually takes for something to run, and if you're trying to run things on thousands and thousands of lines of input, or massive text files or something like that, and you have a really inefficient algorithm or inefficient code, it's going to take forever and it might not even finish. I just want you to be conscious of the fact that your code really does affect how long the computer takes to execute something, and typically the faster something is, the better. We're going to talk about a bunch of different algorithms here that I've written myself — some of them are very well known, and a few I just created — and I'm going to be doing this in JavaScript. It doesn't really matter if you don't know JavaScript, because I'm not going to be writing any code; we're just going to be analyzing it. As long as you can get an idea of what the code is doing and you've programmed before, you'll have no problem following along. Okay, with that being said, let's get started — let's talk about Big O notation and complexity. So, what is Big O notation? Well — why is this not drawing? I'm on the eraser, that's why — Big O notation is essentially just fancy notation for denoting how long it takes a certain algorithm to run. I've already said "algorithm" about a hundred times, and that's what we're going to be focusing on. What is an algorithm? Essentially, an algorithm is a set of instructions, and on a computer it takes a certain amount of time to execute based on some input — that's what we're really focusing on here. So: an algorithm is a set of instructions, and Big O notation describes how long that set of instructions takes to run based on some kind of input. What we're mainly focusing on is input — specifically, input size — and how that relates to time: based on some input size, how much time can we relatively expect this algorithm to take? Now, we're not going to measure it in seconds; we're going to measure it in relative operations, where an operation could be something like adding 1 to a number or grabbing an element from an array. We count operations because different computers run at different speeds and a ton of factors affect the actual time — we just want some idea of whether we're going to sit here for hours or whether it'll finish in a few seconds. That's the main idea of Big O notation.
Now, quickly, before I go on: something that's important to understand is that when we talk about Big O notation, we're typically talking about very large, massive amounts of information. What I mean is, say you have an algorithm that needs to run on five numbers — say our list is 1, 2, 3, 4, 5 and you need to do something with it. Computers are super fast; they can do thousands of operations per second. So if we have an algorithm that's pretty inefficient, but we're only running it on five elements and we're not using it very often, it doesn't really matter how inefficient it is, because it's still going to finish almost instantly. It's like replacing one line of code with five "faster" ones: if you're only using a really small amount of input, it's negligible, because you're saving maybe a millisecond. So we don't really care about small data sets — we only care about large data sets — and you'll see what I mean as we go through here. Okay, now I'm hoping you have a little bit of a math background, because I'm going to start drawing some functions and talking about growth. I'm going to draw two axes: I'll call the vertical axis time, and the horizontal axis n. I haven't talked about n yet, so let's do that now. Remember, we're trying to figure out, based on some kind of input, how long our algorithm is going to take relative to that input — is it going to take five times the input, how many operations will we do on that input? That's what we want to know. So if our input is, say, an array, the length of that array is denoted by n. You're going to see me use n a lot; it just means the length of the array. If we have an array that goes 1, ..., n, its length is n. That's really important — it's going to come up a lot. Okay, let's get rid of all this — this eraser is not that easy to use — there we go. So, based on some input of size n, we're going to figure out how many operations — in other words, how much time — it's going to take to run. I'm going to draw a few functions here, and I want you to think about which one you would rather have, keeping in mind that we want everything to be as efficient as possible. So let's draw these, including a purple one that looks something like this, with an intersection point at the purple dot — three different functions. If you know anything about math, you should recognize what these functions might be: this one could be time = n, this one could be time = log n, and this one could be time = n², a quadratic — which would also extend to the left, but we don't care about the negative side. So, three different functions. Now I'm going to ask you: if you're running an algorithm and you want it to run as fast as possible on a massive amount of input, which function would you take?
Say these are three algorithms, and these curves are how long they take: do you want this one, this one, or this one, in terms of how much time they'll take, based on their graphs, expecting them to continue on to infinity? Which one would you want? Well, I'd hope you'd say the purple one — the log n function. The reason you'd want it is that as the size of the input increases, the time increases very slowly. Have a look: at the beginning it increases faster than the others, but once we get past this point it starts slowing down, and the slope keeps decreasing as we continue. Compare that with the red one: it's actually faster up until this point, because the time is lower, but as soon as we pass it, it starts increasing faster and faster and keeps going. And the green one is fine, but it's just a straight line — it's linear, so it increases at the same rate the entire time; for an input of size n, the time is exactly n. So we want the log n one — and log n is a shape we'll write with Big O notation as well; we'll talk about that in a second. Some people might say, though: what if our input size is small? Say this point is n = 5 (I know it's not exactly, but say it is) and we have an input of size 3, somewhere around here — which function would you want? You'd probably want the red one, because the time is shortest. And yes, you'd want that for the smaller input, but as the input gets larger and larger and larger, you can see the red one grows way faster. That's why we really care about what happens as n approaches infinity — when n is a massive number — rather than anything to the left of the intersection point between the different functions. So that's the visual of the speeds. We've just seen that log n is way faster than n² — not "more operations," sorry, just faster in terms of time for the same input. These are a few standard running times that algorithms commonly have.
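To make those shapes concrete, here are rough operation counts at an input size of n = 1,000,000, assuming one unit of work per step (these numbers are my own illustration, not from the video):

    log2(n)  ≈ 20 operations
    n        = 1,000,000 operations
    n^2      = 1,000,000,000,000 operations
    2^n      = a number with roughly 300,000 digits — effectively never finishes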
Okay, so now let's start talking about how we actually write Big O notation, and then we'll analyze some algorithms. Remember, our input size is n — n is our input. When we write Big O notation, we write a big O, put brackets, and inside the brackets we put some kind of function of n: it could be n + 1, it could be n², it could be log n, anything like that. And remember, we only really care about massive inputs — n approaching infinity. If you've ever done anything with limits, you'll understand what I'm about to do here. Let's say the number of operations some algorithm does is 3n + 7 — if I drew it on a grid, it would be a straight line starting at 7 and going up from there. Since we only care about massive input sizes, we can simplify 3n + 7 by removing all of the constants. What's a constant? A constant is something that doesn't change — it stays the same. Look at the +7: that's a constant. Whatever the size of n is — and n could be heading toward infinity — we're doing 3 times that, plus 7. Think about it: if n is essentially infinity, does adding 7 really change how long this takes? Imagine 10 billion and add 7 to it — it's still pretty much 10 billion; that 7 didn't make a meaningful difference. So we can omit the 7, we can delete it, because it's a constant — and even if the constant were something like 10,000, we could still drop it, because we only care about the input size at massive numbers. Alright, now we have 3n. Are there any other constants left? Well, 3 is actually a constant as well, because it's not a variable — any number that isn't a variable is a constant. So think about it: we've simplified this to 3 times infinity. What does 3 times infinity equal? Still infinity — anything times infinity, plus infinity, or divided by infinity is still infinity, because infinity is just an arbitrary, unbounded quantity. So we can remove the 3 as well. We've gone from a function that was 3n + 7 to simply n. If we compared them on a grid, the original had a higher slope and an offset, and the new one is a plain line through the origin. You might say, "well, those functions are different" — and I agree, they are — but the thing is, they're both linear, meaning that as n increases, the time increases linearly, not exponentially; it doesn't grow any faster, it grows at the same rate. And that's what we care about: the rate of growth of our function. We don't care about the exact value; we care whether it's linear, quadratic, exponential, cubic — just the type of function. So that's how we simplify: we remove the constants, because we don't care whether it's plus ten or plus fifteen or times three — we only care about the variable part and what it means in terms of growth. Let me draw a few examples of some pretty common Big O notations — some pretty common runtime complexities. Let's do a grid: we already did the linear one, we already did log n, and we already did the quadratic, which looks like that. These are three very common Big O notations, or runtime analyses, that you're going to see a lot, and when we start actually analyzing algorithms you should understand them a lot better.
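Written out compactly, the constant rule being described works like this (same idea, my notation):

    T(n) = 3n + 7  →  drop the additive constant 7      →  3n
                   →  drop the multiplicative constant 3 →  O(n)

So 3n + 7, 10n + 10,000, and n itself all end up in the same class: linear, O(n).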
Okay, so those are three common ones, but there are two more we want to talk about, and we're going to rank them in terms of how long they take — which is slower, which is faster. We did log n, and we did n², but what about 2ⁿ? Who knows which is faster, 2ⁿ or n²? Well, 2ⁿ is actually way, way, way slower than n². This curve is 2ⁿ, and n² looks something like this: they both grow very quickly, but 2ⁿ grows way faster than n². So if I gave you two algorithms and said algorithm one takes 2ⁿ time to run and algorithm two takes n², which one would you pick? You'd pick n². That's important to understand, because a lot of people don't realize that 2ⁿ takes a lot longer — a lot more operations — than n². I'll show you an example of a 2ⁿ algorithm in a bit as well. Some other common runtimes use more than one variable. Sometimes our algorithms have more than one list, or more than one input — say we have a function that takes two inputs, n₁ and n₂. A common runtime for two inputs is something like big O of n₁ + n₂. Can we simplify that any further? Remember, with 3n + 7 we could scratch out the 7 and the 3 and keep only the n, because that's the rate of growth; likewise with something like 3n² we can drop the 3 and keep the n², because that shows us the shape of the growth, which is what we care about. But with n₁ + n₂, these are different lists — different-sized inputs — so we need to keep both of them, because one could be way larger than the other. Whenever you have two variables, you keep them both. Now I'll show a slightly more advanced example of how to simplify a Big O expression, and then we'll actually start analyzing algorithms, which should hopefully make this all make more sense. Remember, we really care about the shape of the function. This might be a bit complicated and you might not immediately see why I'm doing it, but say we have a function like 3n³ + n. How do we simplify it? First of all, we can get rid of the 3, so now we have n³ + n. Now, which term in there defines the growth of the function — which one increases the quickest? Well, n³ shoots up like a cubic, whereas n is just a straight line, so obviously n³ is the faster-growing term. It's the more important term; it really dictates how large the result gets — a thousand cubed is a huge number, and then you're just adding a thousand to it. So we can simplify this to just n³. Why can we do that? First of all, because both terms use the same variable — that's important to know. If this were n₁³ + n₂, with two different variables, we couldn't do it, because n₂ could be some massive number that actually changes things — different variables means we keep both. But with the same variable, n³ really dictates how the function grows; that's the dominant term, as they call it. So we can simplify n³ + n to just n³. This is a bit more advanced, and you rarely need to do exactly this, but it's important to understand that the dominant term is what dictates how fast the function grows, and that's what we care about — so we'd just say this is big O of n³.
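And the dominant-term rule from that last example, written the same way:

    O(3n^3 + n) = O(n^3 + n) = O(n^3)    (same variable: keep only the fastest-growing term)
    O(n1 + n2)  stays  O(n1 + n2)        (different variables: keep both)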
Okay, so now we're done with all the drawing — well, we'll probably still have to draw a bit — and we can get into the real analysis and read through some actual algorithms, which should hopefully be a bit more exciting and entertaining. So, this is our first algorithm, and it's known as the linear search algorithm. Essentially, given some input of size n — in this case a list, an array — it's going to find a given number. For example, here we have an array with how many elements? One, two, three... ten — so n = 10, ten elements. We pass our function the array and the number we're looking for, and it attempts to find the index where that number occurs. It has a for loop that walks through the array, and if it finds the number, it logs that index. Let's just run it first — you can see I've been testing these — I'll run linear-search, and you see "found at index 4": it found 32 at index 4, which is the correct index for 32. Alright, so how do we figure out the Big O notation of this function? This is our algorithm — the highlighted part; we're just calling it down here — and what we want to figure out is how long it's going to take to run in terms of n. The arr parameter is our input of length n — obviously that can change, we could have more elements or fewer — and the list down below is just what we're passing in. Let me undo all this highlighting so we can see everything that's happening. The code loops through the entire array: i goes from 0 up to n − 1, because if the length is ten we only go up to index 9 — hopefully you get that. So how many operations are we doing for an input of size n? We do everything inside the loop n times. And what do we do in there each time? We check something, we possibly log something, and if we've found the number, we break and end the loop.
You could call that one operation, two operations, three operations — but really we're just doing a small, fixed amount of work on each pass. So for an input of size n, we do roughly one thing per element; essentially, we do n things, n operations on our input of size n, so we'd say the Big O notation is O(n). Now, some of you might point out that we won't necessarily do n operations: notice that if we find the element we're looking for, we break, which means we stop looping. This is another important concept with Big O notation. We loop up to arr.length, which is 10 here, so up to n times — but if we happen to find the element before we reach the end, we're done; we break out and don't keep looping. So why wouldn't we say this is big O of some constant, like 5, since we only needed to run the loop 5 times before we reached 32? Well, the thing is, we don't know how many times we might have to loop. In the worst case, the element 32 could be at the end of the list, and in that case it would take n iterations to find it. We always look at the worst case: what is the worst case this algorithm can run in? For this algorithm, the worst case is a list where 32 sits at the very end — it takes longer because we only break once we find the element, and if we don't find it until the very last index, we've had to look through the entire list, looping n times. The best case would be the element sitting right at the beginning: if 32 were the first element instead of 1, we'd loop once, see that 32 equals 32, print it out, break, and be done — one operation. But we don't know where our element is going to be, because the list could change, could be any size, could have different elements, so we always care about the worst case, which is O(n), because we could loop at most n times. It's unlikely the element will be at the very end, but it could happen, so we have to account for it. It's always the worst case — you never think about the best case. I wish there were a clear button for this, because this eraser isn't really working well... okay, let's just close this — apparently it doesn't want to close, so give me a second, guys... okay, I got that closed. Anyway, that's our first algorithm, and it runs in — I'll put a comment here — O(n), which is also known as linear time. Essentially, the longer the list gets, the longer it takes, and it's linear: if the list grows by one element, that's at most one more operation we'd have to do. So this is O(n). Hopefully you understand this: looking at the code, it's one for loop, which means the loop can run at most n times; inside the loop we do a few things if we find the element, but we really only care about how many times the loop runs, because the number of iterations is what dictates how long this algorithm takes to execute.
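The transcript doesn't include the actual source file, so here is a minimal JavaScript sketch of a linear search like the one being analyzed; the function name, the sample array, and the exact log message are my own guesses:

```javascript
// Linear search: scan the array front to back until the target is found.
// Worst case the target is at the end (or missing), so the loop runs n times -> O(n).
function linearSearch(arr, number) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === number) {
      console.log("found at index " + i);
      return i; // stop as soon as we find it (the video's version breaks out of the loop)
    }
  }
  console.log("not found");
  return -1;
}

linearSearch([1, 4, 16, 2, 32, 7, 9, 12, 5, 8], 32); // found at index 4
```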
Alright, next one — that was linear search; this is binary search. You might have heard of it before. Given a sorted list, it does the same thing as before: it finds the first occurrence of whatever element we give it and tells us where it is in the list. Let's run it first — by the way, I'm just using Node.js to run this stuff, I don't know if you care — so binary-search.js: you can see it finds 1 at index zero, and if I change it to look for 8, it should find one of these 8s for us — run it, and it finds it at index 9; you can count the indices if you don't believe me. So it just finds the element — but why is this different from the last one? They do the same thing, right? Given an array and a number, they tell us where that number appears. The difference is that this one runs a lot faster than the last algorithm, and we're going to talk about why right now. This is known as the binary search algorithm, and it's called "binary" because every time it runs, it halves the size of our list; you'll see how that works in a second. Essentially, it looks at the middle of our sorted list — think about having a sorted list and looking for some element — and determines whether the element we want is greater than or less than that middle value. Let me change the target to 10 so this makes more sense: we're looking for 10, and the middle value in our list is, say, 8. We know this list is sorted, going from least to greatest, so we look at the middle value, 8, and ask: is 10 greater than or less than 8? Well, 10 is greater than 8, which means 10 cannot be anywhere in the left half of the array. Why can't it be there? Because the list is sorted: if 8 is the middle and 10 is greater than 8, there's no way it can come before 8 — so why would we bother looking at any of those elements? We wouldn't, and we don't. So we cut that portion of the list — the highlighted part is no longer considered — and our list becomes just the remaining half: 8, 8, 9, 9, 9, 9, and so on, and we're still looking for 10. We do exactly the same thing with this smaller list: find its middle value, which in this case is 9, and ask whether 10 — again, the element we're looking for — is greater or less than 9. It's greater, which means all the elements to the left of that are gone too; we can remove them. Our new list becomes the remaining piece — again, why bother looking at a portion where we know it can't be? Now we look at the middle of what's left, which in this case is 10, and one of our checks is "greater than, less than, or equal to" — we find that 10 equals 10, so we know the index, and we return it from the function.
So that's how the binary search algorithm works. Now let's try to figure out how many operations it takes. Let me bring up the drawing tool — not that one, sorry, one second, I want this one — and let's dissect it. Say we have a list of length four: 1, 2, 3, 4. Every time we run this while loop — the one right here — we split the list in half. So our first pass looks like this, our second pass looks like this, and a third would look like this, because each time we essentially cut off half until we're down to one element. The worst case for this algorithm is that we keep halving until there's only one element left, and that element is either the one we're looking for or the element doesn't exist in the list at all — that's the worst case, because we might get lucky: if the middle value we pick happens to equal what we're looking for — say we're looking for 2 and the middle is 2 — we return that index right away, so it can finish before splitting all the way down. Okay, so we split once, split again, and we're left with one element. Our input size was 4, and we ran the loop really only two times: this was pass one, this was pass two — the loop ran twice. So what function is that? I want you to figure it out, so let's do one more example. Let's grow the list to 1 through 8 — we've doubled the size. How many passes now? We start, we split the list in half, and we take either the left side or the right side — it doesn't really matter, say the left — so now we're at 1, 2, 3, 4. Then we split again and we're at 1, 2; split again and we get 1. Notice that now we have 1, 2, 3 passes: we doubled the size of our list and only added one more run of the while loop. So how does that work — what function is this? We've talked about it: this is actually log base 2 of n. What that means is that, whatever the size of our input, we can find the maximum number of passes by taking log₂ of n, and log₂(n) is a function whose slope keeps decreasing as n gets larger. So this is a super fast algorithm, and it's what you'd want to use to find something in a massive list. The only disadvantage is that you need a sorted list — otherwise you can't do something like this at all. And that is essentially how binary search works, and why it's log base 2 of n.
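The real file isn't shown in the captions either, so here's a sketch of a standard iterative binary search consistent with what's described; note that the video's version reportedly returns the first occurrence of a duplicated value, while this plain version just returns some matching index:

```javascript
// Binary search: repeatedly halve the search range of a SORTED array.
// Each pass of the while loop discards half of the remaining elements,
// so it runs at most about log2(n) times -> O(log n).
function binarySearch(sortedArr, number) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === number) {
      return mid;        // found it
    } else if (sortedArr[mid] < number) {
      low = mid + 1;     // the target can only be in the right half
    } else {
      high = mid - 1;    // the target can only be in the left half
    }
  }
  return -1;             // not present
}

console.log(binarySearch([1, 2, 4, 8, 8, 9, 9, 9, 9, 10], 10)); // 9
```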
Now, one more thing — I really wish there were a clear button for this, man; it's a flaw in the app, maybe I'll make one myself. Inside that while loop we run a bunch of different operations on every pass, and the loop itself runs at most log₂(n) times. Say we do five operations each pass: wouldn't you say that's 5 times log₂(n)? You'd be correct — we do five operations, log₂(n) times. But five is a constant, and we don't care about constants, so we just remove it and say this is big O of log base 2 of n. That's how that works. Okay, we'll go a bit faster for these next ones. Now we're on to example one, which is just one I wrote myself: it counts duplicate elements. Let's try to analyze it and figure out how long it's going to take. The important thing to look at when you're analyzing something is the loops, because remember, we don't care about the constants — all the stuff inside the loops doesn't matter that much, since we're going to omit the constants anyway; say you do something n times, but each time you do five things — that's 5n, and we just drop the 5. So, how many times does this code run? We're counting the duplicate elements: given some array, we find how many values appear more than once. For example, in this one there are two duplicate values, because 2 is duplicated and 6 is duplicated — and even though there are three 2s, 2 appearing more than once only counts once; it's one duplicate value, and that's what we count. Let me show you that it works first — run example1.js — and you see we get three duplicate elements, because of the 7 7, the 6 6, and the 2 2 2. So how long does this take to run? Let's look at it. We loop up to array.length, and that's going to happen n times, because the length of our array is n — that's our input size. So this first for loop runs n times. But inside that for loop, what do we do? We loop again, n times, through the entire list. So we loop once, and for every step of that outer loop we loop through the entire list again: the outer loop runs n times, and the inner loop runs n times for each of those, because the inner for loop is inside the outer one. You might see where I'm going: how long does this take to run? This actually runs in n², and the reason is that for every one of the n outer steps we do something n times — that's n multiplied by n, which is n². This is quadratic, and as the length of our input increases, it's going to take a lot longer to run. These are the things we want to avoid — nested for loops and nested while loops — because they take a really long time to run. And if I put yet another for loop inside, doing something, then that would run n times, times n times, times n times — n cubed — which is a crazy number of times to be running. So that's how we do that one; I think it's pretty straightforward, so I don't need to do too much more with it.
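Here's what a nested-loop duplicate counter like this "example one" could look like — my own reconstruction, since the file itself isn't in the captions:

```javascript
// Count how many distinct values appear more than once.
// The outer loop runs n times and the inner loop runs n times for each outer pass,
// so the number of comparisons grows as n * n -> O(n^2).
function countDuplicates(arr) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    let isFirstOccurrence = true;
    let appearsAgain = false;
    for (let j = 0; j < arr.length; j++) {
      if (j < i && arr[j] === arr[i]) isFirstOccurrence = false; // already counted earlier
      if (j > i && arr[j] === arr[i]) appearsAgain = true;       // shows up again later
    }
    if (isFirstOccurrence && appearsAgain) count++; // a duplicated value counts once, even "2, 2, 2"
  }
  return count;
}

console.log(countDuplicates([7, 7, 6, 1, 2, 6, 2, 2, 3])); // 3 (7, 6 and 2 each appear more than once)
```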
Okay, let's go to example two. All this one does — and it's similar to the last one — is print x, y, z. This is a good example; actually, I don't want to run it yet, because that would give it away. We take three values, a, b, c, as our input, and we loop a times, b times, and c times. This one's a bit more complicated: what's its Big O notation, how long does it take to run? We can treat a, b, and c together as our input. The outer loop runs a times; the second loop runs b times for every one of those a passes — I know it's confusing — so those two loops together run a times b times. What about the third one? It's another for loop inside the second, so it runs c times for every pass of the b loop, and the b loop runs for every pass of the a loop. So this is actually a times b times c. Can we simplify that to something like n³? No, you can't, because they're three different variables. If it were one variable — a, a, a — then we'd say a³, because it's a times a times a; but since they're three different variables, we say this runs in a × b × c. And I'll prove it to you: I'm going to run example2.js now — watch my screen — okay, we get a bunch of output, let's scroll through it; these lines of code ran this many times. You'll see x starts at 0, y is 1, z is 1; x is 0, y is 1, z is 2; x is 0, y is 1, z is 3 — so z cycles 0, 1, 2, 3, 4 over and over, a ton of times, while x stays at 0, then goes to 1, then 2, then 3, and once it's done, it's done. That gives you an idea of exactly how this works: x stays put while y changes, and y stays put while z changes. That's how we can tell this runs a times b times c times — and if you counted the output lines for these inputs, you'd get 5 × 5 × 5, which is 125 lines. Okay, that's how long that one runs.
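A sketch of "example two" as described — three nested loops over three independent sizes; the function and variable names here are my assumption:

```javascript
// Three nested loops over three independent sizes a, b and c.
// The innermost body runs a * b * c times -> O(a * b * c).
// This cannot be collapsed to n^3, because the three sizes are separate variables.
function printCombinations(a, b, c) {
  for (let x = 0; x < a; x++) {
    for (let y = 0; y < b; y++) {
      for (let z = 0; z < c; z++) {
        console.log(`x: ${x} y: ${y} z: ${z}`);
      }
    }
  }
}

printCombinations(5, 5, 5); // prints 5 * 5 * 5 = 125 lines
```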
Okay, example three. This one generates all of the binary numbers of length up to 16 — it starts from length 0 and goes up to length 16. Does anyone know how many numbers that is? It's a lot of numbers, I'll tell you that right now. It works using something called a queue-based algorithm — I'm not going to explain exactly what that is, because it's a bit confusing — but I'll run the program and then we'll try to figure out how long it takes. So, example3.js — alright, ready, I'm going to run this, watch my screen — this is going to take a second... and that's exactly what I'm talking about with time complexity. I'm only generating the binary numbers up to length 16, but look how long this is taking to run — and if we didn't know anything about time complexity and we weren't printing the output to the screen, we probably wouldn't have realized how long this algorithm I created takes. It's super inefficient, but it's really the only way to generate all of these different numbers. You can see, if you scroll through, we have a ton of different binary numbers — I can't even scroll back to the start. So how long does this take to run? This actually takes on the order of 2ⁿ, where n is the length of the binary numbers, which is 16 in this case, because that's how many possible binary numbers of that length there are: 2¹⁶. We can show that pretty easily. If you know anything about binary, each digit can be either 0 or 1, and numbers can obviously be any length. We're doing all the binary numbers up to length 16, and I'm not going to write them all out, so let's do an example with 3 digits. With 3 digits we can have 000, 001, 010, 011, 100, 101, 110, 111 — sorry, why is this so difficult to write out — so that's 1, 2, 3, 4, 5, 6, 7, 8 different binary numbers for 3 digits. For 2 digits, scrap a column — which one should we scrap? let's take this one — and the remaining patterns collapse to 4. So for 2 digits we get 4, for 3 digits we get 8, for 4 digits we'd get 16 — we get 2 to the number of digits, so in our case 2¹⁶. That takes a really long time to run: our input size was only 16, because we're only doing 16 digits, but 2¹⁶ is a massive number, and that's why the program took so long. If we'd had an algorithm that could do this in linear time, it would only have run 16 times and finished instantly, but it ran about 2¹⁶ times, which takes a long while — and that's another reason we need to understand time complexity and how long these things take. Now, again, notice that I'm doing a bunch of stuff inside this loop — 1, 2, 3, 4, 5, 6, 7, 8 operations per pass — and the loop runs about 2¹⁶ times, so that's 2¹⁶ multiplied by 8. But here's the thing: 2¹⁶ is a massive number; multiplying it by 8 makes it 8 times larger, sure, but we can just neglect that, because it doesn't tell us anything more about the shape. We know the shape is exponential and that it takes a very long time; multiplying by 8 just says it's 8 times bigger — it doesn't change the shape of the growth, which is what we care about. We only care about the shape of the increase. Okay, so that's it for that one.
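The queue-based generator isn't shown in the captions either, so this is just one common way to write it — whether it matches the video's code exactly is an assumption on my part:

```javascript
// Generate every binary string of length 1 up to maxLen using a queue (breadth-first order).
// There are 2 + 4 + ... + 2^maxLen = 2^(maxLen + 1) - 2 strings to produce,
// so the work grows on the order of 2^n in the number of digits -> O(2^n).
function generateBinary(maxLen) {
  const queue = ["0", "1"];
  const results = [];
  while (queue.length > 0) {
    const current = queue.shift();   // dequeue the next string
    results.push(current);
    if (current.length < maxLen) {
      queue.push(current + "0");     // extend it with a 0...
      queue.push(current + "1");     // ...and with a 1
    }
  }
  return results;
}

console.log(generateBinary(3)); // ["0", "1", "00", "01", "10", "11", "000", ..., "111"] -- 14 strings
```

Bump maxLen from 3 to 16 and the output jumps from 14 strings to 131,070 — that's the 2ⁿ blow-up being demonstrated.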
What was the other example I had? Okay, this last one has two input variables; let's talk now about how we handle that. We have array1 and array2 — how do we analyze this with two inputs? Essentially, we look at the first for loop, then the second for loop, and then whatever happens inside. The first for loop runs over array1, which has length n₁, and array2 has length n₂. So the first for loop happens n₁ times, and the second for loop happens n₂ times — but it happens for every pass of the first one. The outer loop contributes n₁, the inner loop contributes n₂, and since the second loop is inside the first, this runs n₁ times n₂ times — in other words, big O of n₁ × n₂. That's how we determine it, and the reason we can't simplify it further is that the variables are different: if one list were way bigger than the other, it would affect how long this takes. So, that's been it for this introduction to Big O notation. I hope you at least have some sense of the relative speed of different algorithms and how you can look at one and get a rough idea of how long it's going to take. This is super useful, and I highly encourage you to be critical of whether you're writing efficient code or not. If you're writing code that runs in n², know that it's going to take a long time to run on big inputs; if you're writing something that runs in 2ⁿ, understand that it's going to take a massive amount of time; and if you have a huge sorted list, it might make sense to use a binary search on it, because it's going to be so much faster for finding elements. As I wrap up this video, just a reminder that it was sponsored by Microsoft — if you're not already, you should head over to their YouTube channel and hit subscribe, and remember to check out the Dev Collective via the link in the description: a ton of awesome courses, completely free, and I know you're going to find a lot of value there. With that being said, thank you for watching, and I'll see you again in another one. [Music]
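To round out that last two-array example from the captions: the actual loop bodies aren't shown in the transcript, so the counting logic and names below are purely illustrative — only the loop structure matches what's described:

```javascript
// Two independent inputs of lengths n1 and n2: the outer loop runs n1 times
// and the inner loop runs n2 times per outer pass -> O(n1 * n2).
// Neither term can be dropped, because the two sizes are different variables.
function countCommonElements(array1, array2) {
  let count = 0;
  for (let i = 0; i < array1.length; i++) {
    for (let j = 0; j < array2.length; j++) {
      if (array1[i] === array2[j]) count++; // constant work inside, so only the loops matter
    }
  }
  return count;
}

console.log(countCommonElements([1, 2, 3], [3, 4, 3, 2])); // 3
```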
Info
Channel: Tech With Tim
Views: 7,316
Rating: 4.951807 out of 5
Keywords: tech with tim, microsoft, microsoft partner, time complexity, big o notation tutorial, code efficiency, algorithm analysis, big o tutorial, bigo tutorial, time complexity of algorithms, computer science, big o notation
Id: goWkNKHXkMg
Length: 45min 41sec (2741 seconds)
Published: Mon Apr 01 2019