AI Hand Pose Estimation with MediaPipe | Detect Left and Right Hand + Calculate Angles

Captions
[Music] What's happening, guys! My name is Nicholas Renotte, and in this video we're going to be going through advanced hand pose tracking. First up, we'll detect our left hand versus our right hand, and then we'll go through some kinematics: we'll calculate the angles within our hand, looping through multiple joints so we can calculate several joint angles at once. In order to do that, we're going to be leveraging the code that we wrote in part one of the hand pose tutorial; I'll include a link to that in the description below. We'll pick up where we left off and detect our left hand versus our right hand. This capability is actually pre-built inside of the MediaPipe library, so we won't need to do too much beyond pre-processing our results. Then we're going to write a function to calculate the angles within our hand; specifically, if I curl my finger like this, we'll get the angle at the joint represented here, and we'll render those angles to the screen in real time. The cool thing is that you can repurpose this code to calculate a bunch of different angles, because the function we'll write can handle multiple sets of joints. So here's how this all fits together: first, we'll leverage our hand pose code from part one of this tutorial, and I'll show you where to get that code; then we'll use the MediaPipe results to detect our left hand versus our right hand; and last but not least, we'll calculate the angles between
the different joints in our hand and render those to the screen using NumPy. Ready to do it? Let's get to it!

Alrighty guys, there are really only two things we need to do for this advanced hand pose tracking tutorial, because we're building off part one. If you haven't seen part one, by all means check the link in the description below; I'll also include a link to the code we're kicking off from. On top of that baseline code we're adding two things: detecting our left and right hand and rendering those results to the screen, and calculating the different angles within our hand. The angle code will let us use any three points to calculate an angle. The code I'm working with here is from part one of this tutorial, available at github.com/nicknochnack/MediaPipeHandPose. In that block of code we did a couple of key things: we installed and imported our dependencies, we drew all the different landmarks within our hand using a real-time feed from our webcam, and we wrote out the detection images to a folder on our desktop. This time we're not going to write out those images, because we're focused on the two new components.

Let's recap the first tutorial: we import our dependencies, set up MediaPipe, and then run the big block of code that gives us our hand pose model. This should bring up a little Python pop-up with the results. There it is: a real-time feed from my webcam, and if I bring up my hand you can see all the different landmarks from our model. Now we want to extend this so we can detect that this is the left hand and this is the right hand. One thing to call out before you go through this tutorial: the left/right detections aren't always hyper accurate. These results come from the native MediaPipe model; you could build a classification model on top of it to improve that accuracy, but I figured I'd show you this method to begin with. If you'd like a tutorial on doing it a bit differently, let me know in the comments below.

Now, enough playing around; let's close this down and start building our left and right hand landmark detection. The cool thing about MediaPipe is that it has native left and right hand detection. If we type in results, this is the results variable that we get from our MediaPipe hand pose model. If we type results.multi_hand_landmarks, we get nothing back; the reason is that my hand was out of the frame, and the results variable holds whatever MediaPipe detected in the last frame it processed. So let's run the big detection block again. If you get a little gray pop-up that immediately closes when you run this, just run the cell again and you should get your feed. This time I'm going to keep my hand in the frame the entire time, which means results will represent the last frame where a hand was actually detected. Now if I hit q and scroll down to results again, results.multi_hand_landmarks shows all of our different landmarks. Let's check how many results we've got: it looks like just one, but if you hold up multiple hands you'll normally get multiple results. The cool thing is that you can extract the landmark for any specific joint in a detection. Looking at the landmark map, joint zero is our wrist, so we can use mp_hands.HandLandmark.WRIST as the index into .landmark. So what I've written is results.multi_hand_landmarks[0], grabbing the first result for our first hand, then .landmark, and then we pass through the map value for our wrist, which is really just the value 0.
So if I bring that out, this is just giving us landmark 0. If we wanted a different landmark, say the pinky tip, we just change that index to PINKY_TIP and we get the coordinates for our pinky. The reason I'm showing you this is that when we render our left or right hand text, we're going to render it at the wrist component, so you'll see "Left" or "Right" printed right there. The nice thing about the results from the hand model is that we also get whether it's a right or left hand: results.multi_handedness tells us which hand is being detected. In this case I had my left hand up, so it's printed the label Left, which is all well and good. What we now want to do is write a function that maps the multi_handedness results to the actual hand we've detected. The reason we need this is that the index we get back from the results.multi_handedness array is not always in order, so ideally we want to make sure we pair the right coordinates with the right classification. It's a fair bit of code, so we'll write it out, go through it, and then apply it.

Okay, we've written four lines of code, so let's pause and explain what we've got so far. We're creating a function called get_label, which will return the text we want to render for a particular hand plus the coordinates where we'll render it, so we'll get two values out of this function; we haven't written that part yet, but we'll get to it. To get_label we pass three arguments. The first is the index: the number of the detection we're working with. Remember, if we detect multiple hands, results.multi_hand_landmarks contains multiple sets of landmarks; index 0 was our left hand, and if there were a right hand in frame we'd have a second result in there as well. So we first pass through whether we want the zeroth detection, the first detection, the second, and so on. The second argument is hand, which represents the landmarks for that hand. The third is the complete results variable, which includes both the multi_handedness results and the landmarks for all detected hands. Inside the function we create a variable called output, which will be the final value we push out, and initially set it to None. Then we loop through everything inside multi_handedness; in this case we've got just one result, so we loop once. To do that we've written for idx, classification in enumerate(results.multi_handedness). For each entry we go into classification.classification[0] and grab its index, checking whether it matches the index for our hand; in our big loop the detection index is a variable called num, so the check is classification.classification[0].index == num. In this case, with one detected hand at index zero, it's True, so we go ahead, grab our text, grab our score, grab our coordinates, and push those out as the result of the function. Let's finish this off and then apply it.

Okay, the function is now done; I wrote six lines of additional code, so let's go through them. The first block processes our results and grabs the label and score out of the multi_handedness variable: label = classification.classification[0].label, and likewise for .score. Then we do a little text formatting, combining the label and score into one string: text = '{} {}'.format(label, round(score, 2)). We round the score because by default it has a lot of decimal places; round(score, 2) gives us something a bit more readable. Next we extract the coordinates where we want to render the text. Remember how I showed you grabbing the wrist coordinate up above? That's exactly what we're doing here: we take our hand result, grab hand.landmark[mp_hands.HandLandmark.WRIST].x, then do the same for the y coordinate, and store both inside a NumPy array. Then we multiply by (640, 480) with np.multiply; those are the dimensions of my webcam, but you can extract your own or change this if you need to. If you've seen the MediaPipe Holistic tutorial where I do whole-of-body tracking, this line will look really familiar: we grab the normalized coordinates from the hand pose model and scale them by the webcam dimensions to get the pixel positions for rendering. Once we've got our coordinates back as a two-value array, our x and our y, we bundle everything into the output variable, output = text, coords, and return output.

Ideally, what we get out of this is a string with our label and score, plus our coordinates. Let's test it out. Running it, we hit an error: that comparison should be ==. Then we run get_label, passing through num, the detection we're looping through up in the main loop (if you want a deeper walkthrough of how that loop is set up, by all means check out part one of this tutorial), then hand, and then results. Hmm, looks like I had the attribute the wrong way around; let's double-check: yes, it's .label. That's better. So out of this function we get back the hand we detected, the score, and the coordinates where we want to render it. Now we need to apply this function to our big block of code, so let's copy that and paste it here. We're going to call get_label just under where we're rendering our results: first we'll get the label text and the coordinates, and then we'll use the cv2.putText method to apply them to our image. Let's go ahead and do this, and then we'll see our results. Okay, I think that's all done: we've written three new lines of code inside our big block, and these three lines will render a left or right hand detection. First we check whether we actually got any results back; remember, we initially set output to None, so if there's no mapped result we're just going to get nothing back.
So first up we check if we do have something back: if get_label(num, hand, results), where num is the number of our detection, hand is the actual hand landmarks, and results is our full results variable. Assuming we do get something back, we run get_label again, but this time we unpack the values it returns: text, coord = get_label(num, hand, results). Then, with our text and coordinates successfully extracted, we use the cv2.putText method to render them onto our image. We pass it a bunch of arguments: first the image we're working with; then the text, which will be something like "Left 1.0" (that 1.0 being our rounded score); then the coordinates we got back, which we saw a little earlier; and then some standard cv2 parameters, the font, font size, font color, line width, and line type. Specifically, cv2.FONT_HERSHEY_SIMPLEX, a font size of 1, a color in BGR format, in this case (255, 255, 255), which is white, a line width of 2, and a line type of cv2.LINE_AA. All things holding equal, we should now be able to get our left and right hands detected, so let's go ahead and run this.

I've just hit Shift-Enter, so ideally this runs... cool, we've got our feed and my face detected. If I put my hand up you can see our left hand; if I bring up my other hand... there we go, both hands: left and right detected. Pretty cool, right? This takes it one step further. In this case we're getting pretty accurate detections, but sometimes you'll notice it mixes up the left and the right hand; if you do get less-than-great detections, there is another way to go about it by performing some custom classification, but here we're looking pretty good. Our left hand is rendering with a score of 1.0 and our right with 1.0 as well. We can bring down the green screen and test that again: left hand, right hand, looking good. If I turn my hand a bit you can see the right-hand score changing, while the left stays pretty strong. Cool, that's the first part done: we've now detected our left hand and our right hand.

The next thing we want to do is calculate our different angles. Again, if you watched the MediaPipe Holistic tutorial, you'd have seen how we went about calculating angles there; this time we're going to calculate multiple angles, so we'll do a bit of a loop. First up we need to install matplotlib, which will give us the ability to test out our model before we run it live: !pip install matplotlib. It looks like I've already got that installed, so we're good to go. Then we import the core dependency, pyplot, so we can render out our angles while testing: from matplotlib import pyplot as plt. Cool, we're all good there.

Next we need to determine which angles we actually want to calculate. Rolling back over to the landmark map, we can calculate the angle between any three joints: for example 4-3-2, this angle here; 8-7-6, that angle there; you could even do 5-0-1 and calculate that angle as well. In this particular case we're going to start with two angles, and we might add another one later on: 8-7-6, and 12-11-10.
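Those joint triples, together with the trigonometry we'll apply to them shortly, can be sketched in plain NumPy. This is a minimal sketch: the triples follow the MediaPipe hand landmark map, the middle number of each triple is the vertex joint, and the points here are plain normalized `(x, y)` pairs rather than live detections.

```python
import numpy as np

# Joint triples from the MediaPipe hand landmark map; the middle
# value of each triple is the vertex where the angle is measured.
joint_list = [[8, 7, 6], [12, 11, 10]]

def calculate_angle(a, b, c):
    """Angle at vertex b (in degrees, clamped to 0-180) for points a, b, c."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - \
              np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(radians * 180.0 / np.pi)
    # Keep the interior angle rather than the reflex one
    if angle > 180.0:
        angle = 360.0 - angle
    return angle
```

As a sanity check, three collinear points give a straight angle of 180°, and a corner such as `(0, 1), (0, 0), (1, 0)` gives 90°; because the calculation works on normalized coordinates, scaling to webcam pixels can wait until render time.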
So we're going to do that angle as well: the first finger and then the second one, so we'll calculate this angle here and this angle here. Ideally, when we curl our fingers down like this, you'll see the angle calculated just there; that's probably really hard to see, but you get the idea. Similar to what we did earlier, we're going to write a function that allows us to do this. We start by defining it: def draw_finger_angles. This function is going to render directly to our image, so we won't need to do any additional rendering; it does it all for us. We pass it three arguments: image, results, and joint_list. image is the frame we're working with from our webcam; results is that big results object, which includes the multi_hand_landmarks and multi_handedness values; and joint_list, which I don't think we've actually explained yet, is just an array with sets of joints inside of it. If we grab the first value, joint_list[0], that's [8, 7, 6], the first finger; the second, joint_list[1], is [12, 11, 10], the second finger. If you want additional joints, or multiple extra sets of joints, you can just append more triples to joint_list and the function will pick them up. That's the beginnings of our function; now we're going to loop through each joint set, calculate the angle, and render it to the image, so we'll see the results in real time.

Okay, before we go any further, let's take a little break and look at what we've done so far. First we loop through our different hands, which is this line here: for hand in results.multi_hand_landmarks:. Then we loop through the different joint sets, for joint in joint_list, and then we grab the three different coordinates we need: first, second, and third. As long as your second coordinate is the vertex of the angle you want to calculate, you're good; so 8-7-6 is one combo, 12-11-10 is another, and so on. For each triple we extract those three coordinates as a, b, and c. Let me break that down. First we grab the x coordinate with hand.landmark[joint[0]].x. To show you what that looks like: results.multi_hand_landmarks... and it doesn't look like we've got any results yet. The reason is that we took our hand outside of the frame again, so let's run the detection block, keep my hand in the frame so we've got some results to prototype with, and hit q. Now results.multi_hand_landmarks gives us our landmarks, and we grab the first value, because remember we loop through our hands first; this is effectively hand. Then hand.landmark gives us all of our landmarks in an array, and we grab the coordinate for the joint we're on. If we set joint = joint_list[0], then joint[0] gives us the first value of that triple and we grab that landmark's coordinates; change it to joint[1], which is effectively the next index, and we're grabbing the next joint; effectively we're looping through these. We grab the x coordinate and the y coordinate, and then store them inside an array using np.array, which gives us back an array of coordinates. This isn't yet scaled for the dimensions of our webcam, but it doesn't need to be for now, because we're just calculating the angle; when we actually go to render, we'll apply that transformation. With those points we can apply standard trigonometry to calculate the angles: first compute the radians, convert that to an angle in degrees, and then provide the rendering. So let's finish this off.

Okay, the code is done; I've written six more lines. What this code does is draw our finger angles using our image plus our hand results from the joint list up here. First up, we're looping through each hand in
our results then we're looping through each set of joints in this joint list here and to do that we're grabbing our a value our b value and our c value so a is going to be our first coordinate b is going to be our second coordinate and c is going to be our third coordinate now really all we're doing here is we're grabbing the dot landmark out of this results value here results.multi-hand landmarks then we're grabbing joint zero which is effectively in this case say we take a look at our first joint list or joint value in our joint list it's going to be eight then we're going to do the same to grab b it's going to be 7 and then we're going to do the same for c in which case it's going to be the value 6. so that's going to give us our different coordinates for 8 7 and 6. so this will be a b and c now what we're doing with those three values is we're using our standard angle calculation formula so first up we're creating a variable called radians and we're setting that equal to np dot arc tan 2 or passing through c1 which is effectively our y value for our c coordinate then passing through our y value for our b coordinate then we're passing through our x value for our c coordinate and our x value for our b coordinate so this is just a little bit of trigonometry so i got this from the baseline code for the angle calculation in android from the official media pipe code but i tweaked it to be able to work with python then the next part to that line of code is minus arctan 2 and then instead of passing through our c and our b coordinates we're now passing through our a and our b coordinates so a one so this is effectively y for our a coordinate minus b one which is y for our b coordinate comma a zero minus b zero which is effectively x for our a coordinate and x for our b coordinate and then we're passing our radians value to this next line so angle equals mp.abs which is effectively going to convert this to an absolute value radians multiplied by 180.0 divided by np dot 
pi which is our pi value and then because we don't want our angle to be greater than 180 degrees it's going to be a straight angle we're just saying if angle equals greater than 180.0 convert it to its sub 180 equivalent so angle equals 360 minus angle and then we're going in ahead and rendering it to our image so cv2 dot put text and then we're passing through a bunch of values to that so first up we're passing through our image we're then passing through our rounded angle we're then converting our b coordinate which is up here to a value which has been refactored for the dimensions of our webcam so again really similar to what we did over here so cv2.tx so remember we've got our coordinate inside of our get label function and then we actually use the cb2.text method over here again down here we're using cv2.put text but in this case because we've already got our coordinate extracted for b we're just running tuple and then we're passing through np dot multiply b coordinate multiplied by 640 comma 480 converting that to an integer dot as type int and then we're setting the other parameters that we need so cv2 dot font underscore hershey underscore simplex passing through our font size setting our font color which in this case is going to be white again and then passing through our line size and cb2.line aaa so our line type so let's just undo that so that should be all well and good now so if we run that no errors and if we just delete those two cells so let's actually go on ahead and test this out now oh and then the last line of code is we're actually returning our image that we've drawn on so return image now if we go on ahead and test this out ideally we should get back an image which has our different angles calculated onto it now this is the reason that we actually install matplotlib because we can type in plot.i am show image and this is going to render our image which doesn't have our coordinates rendered on it at the moment now in this case it's rendering 
in a weird color, and that's because you need to recolor whenever you're working with OpenCV. To do that, you can just type in cv2.cvtColor, pass through your image, then cv2.COLOR_BGR2RGB. This should look a little bit better now, and it does: that's our proper, true color. So again, cv2.cvtColor allows you to recolor your images. That's all well and good, but remember we haven't actually updated our image with our angle calc yet. If we go and run this draw_finger_angles function, we should be able to get our image back with our different angles calculated. It's probably going to be a little bit small, but that's fine. So let's do this: we'll create a new variable, test_image = draw_finger_angles(...), and pass through our image (which is this down here), our results, and our joint list (which is this). That looks all well and good; if we type in test_image, you can see our image is looking good. Now if we change our plt.imshow call to render test_image rather than our baseline image, you can see it's really, really small there, but we've actually got our angles calculated: 177.42 and 178.09. So that was just a bit of a test run. (And if you want to get rid of this line here, you can just type in plt.show().) What we actually want to do, rather than running this on a single image, is do it in real time. So we'll copy this code over again (this is the code from step four), paste it down here, and then add in another line of code just below where we rendered our left-or-right detection. Let's go ahead and do that. Cool, so that is our last line of code written. What we've gone and done is, in line with our for loop over here, added draw_finger_
angles, passing through our image, our results, and our joint list. You can see it's indented in line with this for loop here, so it runs once per frame. Now what this should do is render all the different angles from our joint list up here, so ideally we should get the angles calculated for 8, 7, 6 and 12, 11, 10. Let's go ahead and test this out; all things holding equal, this should run successfully. We'll now have our left and our right hand detected, but we'll also get our joint angles, and you can see that it's doing it, all in real time. You can see we've got 178, and if I curl over, you can see that number changing. Pretty cool, right? And it's going to do it for both hands. Now, you can sort of see that it's not always getting our left and our right hand correct; I found that if you do that weird hand pose, it actually resolves itself. But again, you can start to see our different angles calculated and rendered to the screen, and if we curl in, they re-render. Pretty cool, right? There's a thumbnail if ever I saw one; let me smile for it. All right, now what we want to do: right now we've got two sets of angles calculated, this one and this one. Rather than just doing those two, let's go ahead and add a new one so you can see how to actually do that. So if we quit out of this: we've gone and done 8, 7, 6 and 12, 11, 10; let's go ahead and add another one. What's one that I saw that looks really good? It's 4, 3, 2. So 4, 3, 2 is going to be this coordinate here, for our thumb, so it'll be that, right? Let's go ahead and add that, and let's actually add another one as well, screw it. I think that'll be good; let's do 1, 0, 5. Where's that joint list again? So all I'm doing is adding additional values to our joint list array: I'm going to add one,
zero, and five. So remember, we first started out with just 8, 7, 6 and then 12, 11, 10; now I'm adding 4, 3, 2 inside of an array, and then 1, 0, 5, to give us those new angles. If I run this joint list cell again and take a look at our joint list, we've now got four different sets of coordinates: value 0 is 8, 7, 6; value 1 is 12, 11, 10; value 2 is 4, 3, 2; and value 3 is 1, 0, 5. Now, because we're looping through our joint list inside of our draw_finger_angles method, if we go and run this block of code again we should get all of our new angles drawn. If we test it out on our test image... let's actually do this... and we don't have an image rendered. Why is that? That should be fine. What's happened there? "'NoneType' object is not iterable": oh, because we would have taken our hands outside of the frame, so we won't actually have anything in results.multi_hand_landmarks. That's fine. Cool, let's just go and test it out in real time and see what it looks like. So let's go ahead and run this. All right, we've got the gray pop-up; if that happens, just run the cell again. Cool, so we've got our pop-up now, and look at all the angles! Oh, the one on our wrist is obstructed by the left-and-right-hand label, but that's fine if we go and comment that out, and you can now see that we're calculating all of those angles. Pretty cool, right? It just automatically loops through and renders all of them. Sweet. Let's go and comment out the left-and-right label so we can actually see that wrist angle; we just need to comment out the cv2.putText call under the render-left-or-right-detections section, and this should allow us to see that angle there. So let's go ahead and run that again. And again, we've got the gray pop-up.
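For reference, the extended joint list described above is just a list of landmark-index triples: each inner list holds the MediaPipe hand landmark indices for the two end points and, in the middle, the vertex where the angle is measured. A minimal sketch (the variable name joint_list matches the tutorial; the per-triple comments are my reading of MediaPipe's standard hand landmark numbering):

```python
# Each triple is (end point, vertex, end point) in MediaPipe hand landmark indices.
joint_list = [
    [8, 7, 6],     # index finger: tip, DIP, PIP -> angle at the DIP joint
    [12, 11, 10],  # middle finger: tip, DIP, PIP
    [4, 3, 2],     # thumb: tip, IP, MCP
    [1, 0, 5],     # thumb CMC, wrist, index MCP -> angle at the wrist
]

# draw_finger_angles loops over these triples, so adding a new triple
# here is all it takes to render one more angle on screen.
for a_idx, b_idx, c_idx in joint_list:
    print(f"angle at landmark {b_idx}, between landmarks {a_idx} and {c_idx}")
```

Swapping in different triples (say, the ring or pinky joints) works the same way, since the drawing function never hard-codes any indices.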
Boom, that's all of our angles calculated, and you can see that if we open up our hand, that gives us the angle near our wrist. How cool is that? You've got all of your different angles calculated in real time. Again, a pose for the thumbnail. You can mess around now: all of these different angles calculated, pretty sweet, right? If we go and move our hand around, all of these recalculate. Awesome. That's probably enough of me messing around, but you get the idea: you can do so much with this. In this case we've gone and done our left and our right hand detection, and now we've applied a custom angle calculation. Pretty cool, right? So let's take a quick recap of what we've done. We've now gone through step four: we've detected our left and our right hand landmarks. Again, sometimes it might be a little janky, but you can play around with it and swivel your hands around and it tends to work okay; there are different methods to do this, but I figured I'd show you the one that's native to MediaPipe. The second thing we did is calculate multiple sets of angles, and remember, you can update your joint list to render different landmarks if you'd like to. To do that, we created a new method called draw_finger_angles and applied it to a big block of code. But on that note, that about wraps it up. Thanks again for tuning in, guys; hopefully you enjoyed this video. If you did, be sure to give it a thumbs up, hit subscribe, and tick that bell so you get notified when I release future videos. And again, let me know how you went with this, and if you had any problems, by all means hit me up in the comments below. Thanks again for tuning in. Peace!
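For anyone who wants the angle math from the video on its own, here is a rough, self-contained sketch of the calculation built inside draw_finger_angles: arctan2 of the two segments around the middle landmark, converted to degrees, with anything over 180 folded back so you always get the interior joint angle. The function name calculate_angle and the plain (x, y) tuples are illustrative; in the tutorial the coordinates come from results.multi_hand_landmarks, and the result is drawn with cv2.putText at tuple(np.multiply(b, [640, 480]).astype(int)).

```python
import numpy as np

def calculate_angle(a, b, c):
    """Interior angle at b, in degrees, from three (x, y) landmark coordinates."""
    a, b, c = np.array(a), np.array(b), np.array(c)
    # Angle of each segment around the vertex b, then take the difference.
    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(radians * 180.0 / np.pi)
    # Fold reflex angles back under 180 so a straight finger reads ~180.
    if angle > 180.0:
        angle = 360.0 - angle
    return angle

# Sanity checks: a right angle and a perfectly straight line.
print(round(calculate_angle((1, 0), (0, 0), (0, 1)), 2))   # 90.0
print(round(calculate_angle((1, 0), (0, 0), (-1, 0)), 2))  # 180.0
```

Because the landmark coordinates MediaPipe returns are normalized to 0-1, the angle itself is scale-independent; only the cv2.putText position needs the multiply-by-(640, 480) rescale.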
Info
Channel: Nicholas Renotte
Views: 36,075
Keywords: hand pose estimation, hand pose estimation python, mediapipe tutorial, mediapipe sign language, mediapipe python tutorial
Id: EgjwKM3KzGU
Length: 42min 13sec (2533 seconds)
Published: Sun Apr 25 2021