Best AI Motion Capture 2021 - OpenPose vs DeepMotion

Video Statistics and Information

Captions
Have you ever just wanted some good, accurate motion tracking, but couldn't really afford to buy a bodysuit for a one-off project? Fear not. Today I'm going to run down the two best options for AI motion tracking that don't require a suit and only need one single camera to track your motions: from the most popular open-source option, OpenPose, with a staggering 21.6k stars on GitHub, to a popular paid service called DeepMotion. We'll see which actually performs better under this no-suit, one-camera-angle, absolute-minimum condition.

For those of you who don't know, most motion tracking research and projects are built upon OpenPose, such as FreeMoCap, EasyMocap, and FrankMocap, which I covered a while ago. They all serve slightly different functions too, so I would say it's fairer to evaluate the OG, since it contains the most and underlies all the others. Some more recent research may actually be better than the original OpenPose, but practicality-wise, I think OpenPose has the most appeal. DeepMotion, on the other hand, is today's sponsor. I've been granted the power of destruction once again, so I'll be brutally honest with you about their service, in contrast to OpenPose, which you can use for free right off the bat.

The most obvious pros and cons between the paid service and the open-source research: by uploading your data to their website, DeepMotion does all the hard computational work for you, so it doesn't matter if your computer is a potato. As long as you can upload a video, you'll be able to get what you want; it's just that it ranges from free to a monthly paid subscription. OpenPose, on the other hand, is totally free, but you will need high-tier computer specs and a nice NVIDIA graphics card to run it smoothly. Setting OpenPose up can be a hassle for a non-coder too, and exporting the desired file types and information can be really difficult. The official OpenPose doesn't support generating any 3D files such as BVH, GLB, or FBX; it can only generate 2D annotations, which can be quite useless in some cases. There are some implementations, such as MocapNET, that can lift 2D annotations into 3D, but without a Linux machine it's too complicated to set up. So what I'm going to do is compare the rendered video from OpenPose mostly against the BVH file, viewed in Blender, that DeepMotion produces, since DeepMotion's preview video is pretty laggy due to some online 3D engine issues. It may be slightly harder to compare and evaluate, but the differences should still be obvious.

In some basic motions, you can't really see any major differences between the two. DeepMotion may seem a bit laggy, but that's because the engine the website renders in is not very stable; if you take the BVH file it provides and import it into Blender, it will look much better. I noticed OpenPose has a lot of instability around the pelvis: it constantly moves up and down, even in resting motions. In some standard movements facing the camera, the pelvis instability is much more obvious for OpenPose, especially when the video is lower quality. This next video has some pretty intense and sudden movement, and both motion-capture AIs were able to overcome it and generate some pretty on-point arm movements. However, in frames where the arms are not visible, OpenPose just places the arms inaccurately, since it cannot find them, while DeepMotion estimates where they should be.

In some less typical human motions, like a handstand, DeepMotion performs better at keeping the body consistent compared to OpenPose, even though it still twitches a lot. OpenPose fails to focus on the right limbs; it seems confused about which way is up and which way is down, so it mistook the legs for the arms for quite some time. DeepMotion only got the orientation wrong once and failed to register the landing leg. I would say DeepMotion works much better on non-typical edge cases. This also kind of shows how OpenPose operates in a more frame-based way, while DeepMotion has overall coherency in mind, so it is able to cover potential motion-capture errors.

What is even cooler is that DeepMotion provides a motion-smoothing option that helps you remove jitter with its AI filter. If you turn it up to 1.0, which is 100%, it will literally smooth out all your movements, so it just looks like slow motion. You definitely have to play around with it to fix any jittering issues. That's on DeepMotion; I think this tool has potential and can come in really handy.

Other than the motion-smoothing option: since DeepMotion is estimating how the character in a 2D video moves in a 3D environment, there is a third dimension that is not accounted for in the visual data. It's hard for a computer to tell whether someone is lifting or doing pull-ups on a bar; does the ground stick to the feet or not? For this, DeepMotion has an option called foot locking, which lets you choose between Auto, Never, Always, and Grounding. These options assist the motion capture toward the most accurate body tracking and feet landing, which will benefit you if you use the results in other 3D environments like Blender. They all have their pros and cons, and it also depends on how clear the video input itself is, so I wouldn't say it's extraordinarily good, but it does slightly improve the normal body tracking. In contrast, OpenPose's video result has some very twitchy and jittery feet movements, which, even if you somehow successfully converted them to BVH, would probably not look very good. DeepMotion has this problem covered, though.

There's also a speed modifier, which you can use if your input was slowed down from a higher frame rate. This means if a video was shot at 120 fps and played back at 60 frames per second, the footage will appear slowed down by fifty percent. If that slowed-down video is the input, changing the speed modifier to 2x, which is equivalent to 200%, keeps DeepMotion from over-capturing the body on the slowed-down video, which in turn fixes the overestimation of body motions. You can see here that the highlighted outline uses the 2x speed modifier, while the non-highlighted, overlapping one uses the slowed video directly: a lot of excessive jittering can be found, especially near the end of the video where the left arm shakes rapidly, while the highlighted one is much more stable.

On the other hand, OpenPose can include full face and hand tracking, which is really useful. For DeepMotion, there is only one option, which lets you enable face tracking and animate it on the default or a custom 3D character you choose; there's no specific hand tracking for now. As we can see from this DeepMotion demo, the hands aren't curving or turning properly like the person's in the video do, but I think they will include it in the future. The face tracking, however, does a great job of following the right movements; even in this video, where the person's face is not very visible, it is able to track consistently, so I'll give it that. OpenPose, even with its face-enhanced tracking, still fails to register the correct motions if the features aren't obvious enough. DeepMotion is also able to track videos where you can only see the waist up, even though the lower body will mostly stay still. There are a few other functional limitations too: things OpenPose can do, such as motion capturing multiple people, are still not possible with DeepMotion.

But of course, there are functions OpenPose doesn't have that DeepMotion does. The coolest one lets you upload or create a custom 3D character from yourself, or from any image of anyone. It's like designing an MMORPG character. While you're given the option of using an image to speed up the creation process, it does have somewhat limited creative options: limited clothes, hairstyles, colors, body types, and accessories. But if it's not exactly what you want, you can download the result, modify it freely on any other platform, and re-upload it to DeepMotion for further use. Making and downloading a custom character from DeepMotion and modifying it offline also helps you skip adding a skeleton to your 3D model, so I would say it's pretty convenient, to some extent.

When you want to create a body-tracking animation, you just go to Create Animation, drag and drop the video file you want to use as reference, choose your parameters, and wait. After the job is done, you can preview it on their website or just straight up download the MP4, BVH, or other file types and view them locally on your computer. This convenience definitely beats OpenPose's trouble of having to choose the video manually, render it on your PC, set up a whole environment for MocapNET, and convert the JSON annotations of the images to CSV and from CSV to BVH so you can use them in Blender or other tools. So it's really up to you whether you want to stay free or pay your way to convenience. Well, you don't really have to pay for DeepMotion, because they give everyone 30 seconds of free animation time per month. It may not be much, but you could definitely use it for some quick but complicated motion tracking that would be too hard to animate manually, or just to test the waters and see how good DeepMotion actually is.

Let me run down a few more comparisons. In this boxing video, the lighting is absolutely terrible; you cannot see the arms clearly, and when the boxer's right arm gets covered by the left arm, OpenPose usually misses the arm and snaps it back near the spine. DeepMotion has some issues with the legs when the person's right leg goes off-screen, and the right hand keeps dropping too, so this is a video that is challenging for both AIs.

And here's another interesting one: when the person performs a crossover, DeepMotion's body tracking actually understands that the legs are open forward and backward, forming a small lunge position so the basketball can pass through the legs. Viewing it sideways inside Blender gives really nice insight into how well DeepMotion performs at 3D estimation from a 2D video. This, however, cannot be evaluated for OpenPose for comparison, which is kind of unfortunate. And when the person is not facing the camera, DeepMotion gives an estimation of the covered right arm while OpenPose does not; OpenPose is kind of jittery with the right arm in this case, and knows the arm is there, but cannot properly locate its direction.

Overall, the average time needed for rendering locally on my GTX 1080 is slower than running through DeepMotion online. It's around a 2-to-1 ratio: on the best days, 5 minutes versus 3 minutes 20 seconds, and on worse days it would be 3-to-1 or even more. Running OpenPose locally gives you the JSON files and the rendered images, while running DeepMotion online gives you a video, a BVH file, an FBX file, and a GLB file. They also have tutorials on how to import those files into other third-party software; you can check them out. So if you want a really convenient and high-quality AI motion tracking tool, or what they call Animate 3D, be sure to check out DeepMotion with the link down in the description. Thank you guys for watching, and a big shout-out to Andrew and the many other patrons and members who support my work through Patreon and YouTube. If you have any questions, feel free to ask on my Discord, and I'll try to reply as best I can. Follow me on Twitter if you haven't, and I will see you in the next one.
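The captions mention converting OpenPose's JSON annotations to CSV and then to BVH. As a starting point for that pipeline, here is a minimal sketch of reading OpenPose's keypoint output: OpenPose writes one JSON file per frame, and its BODY_25 model stores each detected person's joints as a flat `pose_keypoints_2d` array of (x, y, confidence) triples. The `sample` frame below is a hypothetical one-keypoint example for illustration, not real OpenPose output.

```python
import json

def parse_frame(frame_json: str):
    """Parse one OpenPose per-frame JSON string into a list of
    (x, y, confidence) triples for the first detected person.
    Returns [] for frames where no person was detected."""
    data = json.loads(frame_json)
    if not data.get("people"):
        return []  # OpenPose emits an empty "people" list on misses
    flat = data["people"][0]["pose_keypoints_2d"]
    # Regroup the flat [x0, y0, c0, x1, y1, c1, ...] array into triples
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# Hypothetical single-keypoint frame for illustration
sample = '{"people": [{"pose_keypoints_2d": [120.5, 340.2, 0.91]}]}'
print(parse_frame(sample))  # [(120.5, 340.2, 0.91)]
```

From here, writing the triples out as CSV rows (one row per frame, one column group per joint) is straightforward; lifting them to a BVH skeleton is the hard part that tools like MocapNET handle.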
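The motion-smoothing slider described above is essentially a jitter filter over the joint tracks. DeepMotion's actual filter is AI-based and proprietary; as a stand-in, here is the simplest classical version, an exponential moving average over a single joint coordinate. The function name, parameter, and toy data are mine, not DeepMotion's.

```python
def smooth(track, strength=0.5):
    """Exponential-moving-average jitter filter for a 1-D joint track.
    strength=0.0 returns the raw (jittery) signal; strength near 1.0
    smooths so aggressively that fast motion washes out, similar to
    the slow-motion look described at the 100% setting."""
    if not track:
        return []
    out = [track[0]]
    for p in track[1:]:
        # Blend the new sample with the previous smoothed value
        out.append((1.0 - strength) * p + strength * out[-1])
    return out

# A maximally jittery track: alternating between two positions
print(smooth([0.0, 10.0, 0.0, 10.0], strength=0.5))  # [0.0, 5.0, 2.5, 6.25]
```

As with the real slider, the right strength is a trade-off you tune per clip: enough to kill jitter, not so much that fast punches or jumps get averaged away.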
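The speed-modifier arithmetic in the captions (120 fps footage played back at 60 fps looks 50% slowed, so you set the modifier to 2x, i.e. 200%) boils down to a one-line ratio. The helper below is my own illustration of that arithmetic, not part of DeepMotion's interface.

```python
def speed_modifier(capture_fps: float, playback_fps: float) -> float:
    """Factor needed to undo the apparent slowdown when footage shot
    at capture_fps is played back at playback_fps."""
    return capture_fps / playback_fps

print(speed_modifier(120, 60))  # 2.0, i.e. the 200% setting
print(60 / 120)                 # 0.5, the apparent playback speed
```

Multiplying the apparent speed by the modifier restores real time (0.5 x 2.0 = 1.0), which is why the corrected capture stops over-sampling the slowed footage.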
Info
Channel: bycloud
Views: 40,699
Keywords: bycloud, deepmotion, no bodysuit, single camera, ai motion capture, motion capture, motion tracking, motion capture suit, motion capture camera, motion capture software, ai, ml, artifical intelligence, ai motion tracking, openpose, open pose, openpose face, hands tracking, face tracking, body tracking, ai motion, mocap, mocap ai, movement capture, mocapnet, frankmocap, freemocap, motion capture without suit, blender motion capture, blender motion tracking, motion capture blender
Id: T1vvokFnsbU
Length: 10min 36sec (636 seconds)
Published: Wed Aug 18 2021