Blender 2.8 Motion tracking #3: Camera tracking in depth (tutorial)

Captions
Did you honestly think you were going to learn everything there is to know about motion tracking in Blender in only two tutorials? Is that really what you thought? No, we're going to need a part three, and maybe even a part four and five, because today we're talking about camera tracking, which is a big topic all by itself. Whenever you're motion tracking in any program (SynthEyes, Boujou, Blender of course, 3DEqualizer) your main goal is usually a camera track. For the people going "you keep saying camera track, I have no idea what you mean": the scenario is that you've filmed some footage with a camera that's either moving through a scene, or locked in place and just panning around on a tripod, which is called a nodal pan. Using motion tracking techniques, you want to extract camera path information out of that footage, to find out how the camera was moving, and the reason is that it lets you add 3D objects to the scene, manipulate the footage, do projection mapping and so on. The whole idea of a camera track is to recover camera information so that you can do things in 3D in your scene. That's the general idea.

Let me show you the footage we're working with today. Inside my camera-track folder I have three shots, because this is the bread and butter of motion tracking, the most important part, and worth going over again and again. In the first shot (no volume needed) the camera is physically moving through space, and we want to extract its path. The next shot was out of focus to begin with, but I've already turned these into image sequences, something we covered earlier (make sure you watch the previous tutorials), so I cut that part out of the sequence and trimmed the ending a bit too. Same idea here, a camera moving through space, but with very clearly defined geometry: a floor and a wall with an extrusion in it, so essentially just two surfaces. And finally, once that finishes playing, we have this shot right here. It's hard to tell if you don't have a lot of experience with this, but it's what's called a nodal pan shot or tripod shot: the camera is locked in one place and just tilting and panning around. I like the term nodal pan, because "tripod" implies sitting on a tripod facing forwards; nodal pan just means the camera rotates around a node. You use mostly the same methods to get a camera solve out of a tripod shot, but you do a few things differently, because the camera isn't moving through space, so you can't really get any depth information.

Of course, none of this means anything until we talk about how to actually do it, so let's open up Blender. You can see we're using the release candidate, which is a big deal: a couple of days ago it didn't even exist, and Blender 2.8 stable is coming out very soon, so this is the final piece of the beta (for the people watching this in the future, when 2.9 is already out: I know). We need to set up our project the same way we've been doing over and over again, so make sure you've seen part 1 and part 2.
The first thing we need to do is make sure that in the Render tab, under Color Management, the view transform is set to Standard. Again, this makes sure our footage, our image sequence, looks the way we expect, with nothing like Filmic altering the colors. The second thing is the frame rate we want, which in my case is 30 frames per second (if you filmed at 60 fps and turned that into an image sequence, set 60). I literally just talked about this setting and still had trouble finding it. That's pretty much all the setup we need, so let's also save the scene and call it "part 3".

Now we go to the Movie Clip Editor, which is where we do our tracking, and open up the shot: inside our tracking folder, shot one, press A to select the whole sequence, and import. By the way, I've developed some new technology: if something appears behind my head, my face-cam should move out of the way, and it looks like that's actually working, so that problem is solved. With the shot in here, we want to make sure our project settings match. Right now our scene end frame is 250, so Set Scene Frames does exactly what we talked about and sets the range to 1 to 176. Then let's load everything into memory with Prefetch, and now it plays back butter-smooth (I don't know what "crispy smooth" even means).

This is a good time to talk about what we even want to track. The way camera tracking recovers that camera path in the first place is that we do a bunch of 2D tracks on a bunch of different spots in our footage, and they all move a little differently relative to each other. For example, if we track some dot on the sidewalk versus this dot on the car, they sit at different depths in the shot, so their X motion won't be the same, and neither will their Y motion. If we pick two spots that are close to each other on the same surface, they'll have pretty much the same motion, since they're very local to each other. That theory tells us what we want to be tracking. We don't want to track things that are extremely close together, since we're not getting any new data from that, and we do want to track things on different surfaces, because they behave differently; we want as many tracks doing different things as possible. We also want those tracks to be very accurate: even if we could track something far off in the background, if we can't tell whether we're getting it right, we don't want it. We only want accurate tracks that also last a long time. A track that lasts all the frames, from 1 to 176, is very valuable to us. And one more thing we haven't covered before, since so far we've only talked about tracking itself and the different motion models: we don't want to track objects that are moving, like biker dude, phone kid in the background, and maybe this car if it's moving. They will track, but the motion we get isn't representative of the camera moving, and camera information is what we're after. If we track this person right here, he's going to be moving to the side while the camera isn't moving to the side.
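(If you ever want to script this setup instead of clicking through it, here's a minimal sketch of the same steps using Blender's Python API. It assumes the image sequence has already been opened in the Movie Clip Editor; the clip index and frame count are just placeholders for this particular shot.)

```python
import bpy

scene = bpy.context.scene

# Color management: show the footage without Filmic or any other look applied
scene.view_settings.view_transform = 'Standard'

# Match the frame rate of the source footage (30 fps for this shot)
scene.render.fps = 30

# Assumes the image sequence was already opened in the Movie Clip Editor
clip = bpy.data.movieclips[0]

# Equivalent of the Set Scene Frames button for this 176-frame sequence
scene.frame_start = 1
scene.frame_end = clip.frame_duration
```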
So we only want to track stationary features that are far apart and behaving differently, and generally we want stuff from all over the frame at a variety of depths. That's the general gist of it. So let's actually lay down some trackers and keep talking about the theory as we go.

I'm just going to use an Affine motion model, as I talked about; shearing and deformation is pretty much all we need, though we might hop into perspective tracks later, that's the crazy option. We're going to enable Normalize, which again makes the tracking invariant to lighting conditions; especially outdoors, where clouds could be going by, shadows are mostly no longer a big deal with Normalize applied. For our correlation, 0.9 seems to be our magic number: it means ninety percent confidence in each match, or otherwise terminate tracking. It tracks a frame, and if it's not ninety percent confident in the match, it just stops. So we're going to get very accurate trackers, which again is very, very important for this.

For example, we can track this dot right here and see if Blender can grab onto it for long enough. G to move it (Shift for fine adjustment), and then Alt+S for the search box we talked about. There's a bit of motion in this shot, nothing huge, so I'm going to make the search box a bit bigger so our pattern area doesn't escape it, as we've been discussing: 2D theory being used for 3D results. In the track panel I just want to make sure it's nicely centered, so we can nudge it a little, then lock onto this and Ctrl+T to track forwards. That's doing pretty well... and then it stops tracking, because the pattern now looks fairly different from the initial first frame it's using as reference. So we can just adjust the search box a bit, which adds another reference frame (from now on it's going to track and match patterns relative to frame 149), and Ctrl+T again. This one made it all the way to the end: a perfect tracker, accurate as we saw, and it lasts the whole shot. Ctrl+L to lock it.

The general idea is that we're going to need eight of these. Not necessarily the same eight, but eight trackers that exist on any given frame of the shot, ideally the same eight, so you can tell this is going to be a lot of tracking. And something I kept referencing earlier with "don't worry about it, it's spoilers": if you go to Objects (what's Objects? now you're going to learn), you'll see we have this camera object and nothing else, and when we added our tracker it went right into this camera object. You can think of it as a camera, and what belongs to it is trackers; right now only one, and if we add another tracker we'll have our camera and two trackers that belong to it. That's exactly what we want, because we want at least eight trackers on all frames that belong to this camera, which we'll use to extract the camera path. We can also add other objects, but that's object tracking, so don't worry about it; you can see our tracker disappeared because nothing belongs to that new object, while our camera does have something. I'm going to get rid of it. Once that's sorted, we can just keep tracking.
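(For reference, those same default tracker settings can also be set from Python. This is a rough sketch against Blender 2.8's tracking API, assuming the clip is already loaded; new trackers placed afterwards inherit these defaults.)

```python
import bpy

clip = bpy.data.movieclips[0]
settings = clip.tracking.settings        # defaults inherited by new trackers

settings.default_motion_model = 'Affine'       # shearing + deformation
settings.use_default_normalization = True      # Normalize: ignore lighting changes
settings.default_correlation_min = 0.9         # stop below 90% match confidence

# Every tracker we place ends up under the clip's camera object
camera_object = clip.tracking.objects[0]
print(camera_object.name, "holds", len(clip.tracking.tracks), "trackers")
```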
Again, the most important part of this is not how you actually do the tracking (that's what parts 1 and 2 are: two hours and twenty minutes of training on just how to track in general) but what you want to track, and why, in the first place. So let's add another tracker right here. Why did I choose this spot? It's definitely at a different depth, further away from the camera, and additionally it's at a different altitude, so in some sense it's on a different surface: it's not on the planar sidewalk, it's slightly elevated. And in 2D, the first one is right in the middle while this one occupies the top left, so we're getting some distribution in both 3D and 2D. It's also just a nice feature with a lot of contrast; it's practically a tracking marker, a dot that was built for tracking. Ctrl+T: that's looking good, and it lasted the whole shot, which is always a good sign.

Let's do something here and see if it gets occluded by the curb. Scale the pattern down, then Ctrl+T. You can tell that in an easy shot like this, where there isn't a lot of motion blur, it won't take that long if we only need eight of these, but it can get pretty complicated when there isn't much detail in your shot. Again, this one is at a different altitude since it's not on the sidewalk, so another good choice.

Now I'm going to do this car, and it's important that it is not moving (it's a parked car), otherwise the motion wouldn't represent the camera's motion. Another thing: even though the rest of this car is very reflective, which makes the pattern look like it's moving even when it isn't (it's just the reflection sliding across the surface), this little light or metal piece is probably not going to change at all, so it will be fine, and it looks like a circle, basically a tracking marker. Ctrl+T. Just in case, I want to review this tracker and make sure nothing weird happened: that's looking good; it almost hit the frame margin but worked out fine. Again, Shift+Left Arrow for the first frame and Shift+Right Arrow for the last.

What else can we do? This one right here would probably be fine, but I'd want to make sure it's not moving in the wind at all. You definitely do not want to track leaves blowing in the wind, but the tip of this should be okay, so let's track it; it should work well with the deformation handling and all that, although I suspect this bench might be a problem. Let's try it. Okay, so it made it a while and then of course goes out of frame, and we don't want to do any offset tracking or anything we've talked about, because there's no point; we can just pick a new tracker and move on, we don't need to commit to this one. But it did last for 78 frames, so it's worth keeping; if it had lasted under 10 frames, I wouldn't track it at all. Ctrl+L, and keep going.

Now, ideally I'd like to get something on this tree, but I don't think there's enough detail to grab onto. We can try, though; it's at a very different depth and altitude, so it would be a great tracker. Okay, let's see how that did... looking at this area, there's a lot of noise, and to be honest I'm not quite sure it stayed on exactly the right spot. We want to minimize error, because any error we get is going to accumulate: the solver uses all of these trackers to calculate our camera.
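(Since a track's value depends a lot on how long it lasts, a quick way to audit that is to print each track's frame span. A sketch, using the 10-frame cutoff mentioned above; anything shorter isn't worth keeping.)

```python
import bpy

clip = bpy.data.movieclips[0]
MIN_FRAMES = 10   # tracks shorter than this aren't worth keeping

for track in clip.tracking.tracks:
    frames = [m.frame for m in track.markers if not m.mute]
    if not frames:
        continue
    span = max(frames) - min(frames) + 1
    verdict = "keep" if span >= MIN_FRAMES else "too short"
    print(f"{track.name}: frames {min(frames)}-{max(frames)} ({span} frames) -> {verdict}")
```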
So one weak link can ruin everything. Okay, let's save and keep tracking. We can pick another spot on the sidewalk; as long as not everything is on the sidewalk and we have some different altitudes, we'll be fine. Okay, lock that on and get it nicely centered, then Ctrl+T. That worked perfectly. Let's get a couple more. We could try something in the distance, though I'm not very confident it will work; a lot of this is overexposed and there's not a lot of detail in here, but we could try this pole over here. Again, the reason we're doing a whole part three on this is that it's such an important part of any VFX artist's workflow, so I'm really focusing in on it. Ctrl+T, let's see how this does. That did horribly. Did it hold on for a while? It held on until about here, so you could cut it off there: track, then clear everything to the right, then lock it. But I wouldn't recommend keeping a track like that, so let's just delete it. We want very accurate results, and luckily for us we have a lot of dots all over the scene. Locally, we don't want to be too local (I don't even know what that sentence means), so let's pick this dot right here instead, track it forwards with Ctrl+T, and lock that in.

How many do we have so far? It looks like about six trackers that last the whole shot, and I know this one only makes it halfway through. We can also go to the last frame, pick something that's visible there, and track backwards; that's always a good strategy. Let's pick something in this brick area, like this little spot, since I feel it has more contrast. Basically, if your eye can follow it, then the algorithm should definitely be able to (and by algorithm I don't mean YouTube's algorithm, I mean Blender's tracking algorithm). So Shift+Ctrl+T, or of course hit this button right here. I could barely even watch that, it was moving so wildly, but yeah, I think that's good. It's a shame it only made it about halfway backwards, but Ctrl+L to lock it.

Let's get a couple more down. We could try this area, which is kind of the intersection of a bunch of different lines, and you know what, for this one let's do a perspective track, just because the affine motion model isn't the best for everything and this really is a perspective scenario: we're moving further away, and you can see this surface is going to keep shifting like this, if you can picture that motion in your head. So inside the tracking settings we change the motion model to Perspective. Somebody asked about this in the comments; I thought I'd clarified it, but I'll say it again: the motion model in the settings says "when I add a tracker, I want it to inherit these properties", and the setting itself is local to each tracker. This one is set to Perspective; if I click that one, it's Affine. It works on a per-tracker level. So let's scale the pattern up, use a slightly smaller search box, and Shift+Ctrl+T. Ideally we'd start this on the first frame, because for some reason tracking backwards takes a bit longer (I don't know, I didn't design it). Camera tracking can be seen as very boring, or as very relaxing; like retopology, it's one of those things you want a podcast on while you're doing it, since it's not very entertaining to pour all your energy into, unless you're watching this video, in which case focus one hundred percent. Okay, reviewing this one: we only want to focus on the center point; we don't care about these four corner-pin-style handles, we only care about the X and Y data here.
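(To underline that the motion model is stored per tracker, and the panel value is only what new trackers inherit, here's a tiny sketch that switches one track to Perspective while leaving the rest on Affine. The track name is a made-up placeholder.)

```python
import bpy

tracks = bpy.data.movieclips[0].tracking.tracks

# "Track.012" is a placeholder name for the marker at the line intersection
perspective_track = tracks.get("Track.012")
if perspective_track is not None:
    perspective_track.motion_model = 'Perspective'   # local to this track only

for t in tracks:
    print(t.name, t.motion_model)   # the others still report 'Affine'
```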
By the way, since I haven't mentioned it: if we open up another Movie Clip Editor window, load this same footage and switch it to Graph view, I think this is the first time we have multiple different splines (graphs; I don't know why I called it a spline), multiple data sets on the same graph, each one representing a tracker. When we select a tracker, it selects the corresponding curve, and generally you can see they pretty much clump together. There's a bit of variation, and the curve with a lot of variation, this one right here, turns out to be the car tracker. Of course it behaves differently in X and Y, because it isn't attached, geometrically speaking, to the stuff over here, so you can expect a different kind of motion. If we were to track the biker (which we wouldn't), we'd expect wildly different curves. As long as everything is packed together in a normal-looking way, we're fine; if you see anything abnormal, like this bit right here (I don't know why that's happening, possibly inaccuracies around the margins as the feature leaves the frame), that's something to look out for. For now we'll ignore it, but very soon we're going to care about these graphs a lot.

I think we need one or two more tracks that last the whole shot. Can we grab onto something here? There are a lot of white dots, and yep, we're back on Affine, so Ctrl+click to place one, then Ctrl+T. Did that make it all the way through? No. I don't think it was a motion blur issue; I think we just need a bigger search box. Make it bigger, keep tracking, and there we go, it made it all the way through. Sometimes that's all it takes. Is that all we need? You know what, I do see this nice-looking... I don't know what that is, a stick or whatever. We're going to track it; it doesn't really matter what it is as long as we can tell it would make a good tracking marker. (Actually I do know what it is, but YouTube scans the words you say and some words make a video less discoverable, so we'll leave it at that.) Ctrl+T: made it most of the way through. Make the search box bigger, since you can see the pattern area is going to escape it on the next frame, then Ctrl+T again. I think we can squeeze out another frame or two, so we can reset the pattern, because the only thing that matters is the center point, and that just adds a new reference frame for the tracker; then Alt+Right Arrow to go frame by frame. We got two or three extra frames out of it, which is worth it. Ctrl+L.

So at the beginning we definitely have eight trackers. What about the end? Blender will tell us if we have any issues with having eight trackers at all times, but we can also do a manual review, and it does look like there are eight at every frame. So let's follow through: we have all our trackers, and now we want to combine that data, all those tiny variations which, because they're all driven by one camera motion, kind of move together, and turn it into a camera path.
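(Blender will complain during the solve if coverage is too thin, but you can also audit the "at least eight trackers on every frame" rule yourself. A sketch that walks the scene frame range and counts active markers per frame.)

```python
import bpy

clip = bpy.data.movieclips[0]
scene = bpy.context.scene
NEEDED = 8   # minimum simultaneous trackers for a camera solve

for frame in range(scene.frame_start, scene.frame_end + 1):
    count = 0
    for track in clip.tracking.tracks:
        marker = track.markers.find_frame(frame)
        if marker is not None and not marker.mute:
            count += 1
    if count < NEEDED:
        print(f"frame {frame}: only {count} trackers, add more here")
```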
So we go over to the Solve panel, which I believe we've only visited for a plane track before, so most of this is new and there's a lot to talk about. The first thing you'll see is the Tripod checkbox. If we read the description, "use special solver to track a stable camera position, such as a tripod", you want to enable this if your shot is a nodal pan like we talked about (a still camera that's just panning around), or if you're filming things that are incredibly far away: you're on a boat filming an island, and even though you're moving the camera through space, effectively you aren't, because everything is so far away that no depth change can be detected. That would also be treated as a tripod shot; it's just a special case.

Keyframe is the next checkbox, and it's one we want to enable: "automatically select keyframes when solving camera/object motion". This needs some explanation, along with Keyframe A and Keyframe B. The way I think every camera solver works, in other programs too, is that you give it a part of the footage, not the whole thing, and say "look at this range specifically". It uses a whole bunch of math and algorithms to get a camera solve just for that small range, which gives us a camera path there, and then it uses that to extrapolate to all the other frames. There's kind of an art to choosing a good Keyframe A and Keyframe B, the small range it calculates from before it reconstructs and extrapolates everything else. You can type a number here and a number here, solve, get some kind of result, and keep tweaking until you get the best result possible; or you can enable this checkbox and it will automatically find a good pair of keyframes. I don't know that it's guaranteed to find the best pair, but it generally gets you a good one.

So, with Keyframe enabled and with Refine set to nothing (we'll change that later), we click the big Solve Camera Motion button and see if it works. Something happened, we don't really see what yet, and we get this solve error of 1.059. This is the number you're going to grow to hate, or love if you're very good at camera tracking: it's the most important value for telling whether you have a good camera track. The same way a point track is mostly on target but might be a pixel off in some direction, this is a measure of how good your camera solve is, so obviously you want the error to be as close to zero as possible. If you have no experience with this, you need a sense of what a good value is. Under 3 means (I won't say "accurate") that it's getting the general idea of your camera motion. Under 1, you're into acceptable territory, and here we're just about at 1, so this is acceptable. Under 0.5, you have a good solve. Anything beyond that is not that good, and a lot of the time you'll hit this button and get something like 100, 150, 200. In this case we picked very good trackers, so we basically helped the camera solver get it right; the worse your trackers are, the worse the result you're going to get.
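(The same solve step can be scripted; here's a sketch mirroring the settings used here: keyframe selection on, tripod off, Refine left alone. Note that `bpy.ops.clip.solve_camera()` expects a Movie Clip Editor context, so it's easiest to run from that editor or with a context override.)

```python
import bpy

clip = bpy.data.movieclips[0]
settings = clip.tracking.settings

settings.use_tripod_solver = False       # real camera move, not a nodal pan
settings.use_keyframe_selection = True   # let Blender pick keyframe A and B

# Run from the Movie Clip Editor (or with a suitable context override)
bpy.ops.clip.solve_camera()

camera_object = clip.tracking.objects[0]
print("keyframes:", camera_object.keyframe_a, camera_object.keyframe_b)
print("average solve error (px):", clip.tracking.reconstruction.average_error)
```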
So you want to get this number under 2 or 3 to begin with, and then narrow it down toward 0.5; we'll talk about how to do that. If we go back to the graph in the Movie Clip Editor, you'll see a new color: we had red, we had green, and now we have blue. This new blue graph represents the camera solve, and what you'll notice is that as the solve error drops smaller and smaller, closer and closer to zero, this line flattens out, and that's generally what we want to happen. You'll also notice a bit of waviness, a bit of oscillation, anywhere there's a lot of motion; you can tell there's a lot of motion because there's a lot of change in the green curve, the Y, and in the X, so the trackers are moving a lot there, and the line is very stable, very flat, around the calmer areas. Ideally we tweak our solve until that blue line is essentially flat; that's the end goal.

Before we even talk about how to reduce this error, we need to talk about what the error even means. It's 1.0... what does that actually mean? Here's the explanation. In Clip Display we're going to disable the footage so we can see only the trackers (I was debating this, because I wasn't sure whether it would make what I'm about to show you easier or harder to see), and, more importantly, we're going to enable Info and enable 3D Markers, and we can disable this lock. With Info on, you'll see each tracker has a name (this is track number six, and you can see that over here), along with its average error and the fact that it's locked. Each tracker has its own average error, and obviously the tracks with a lower average error are the good ones.

So how is the error actually measured on an individual level? When we enabled 3D Markers we got this green dot, and this is a bit hard to explain, so I'll do my best. The trackers we've been dealing with, the normal trackers where I keep saying "look at the center point, that's all that matters", are 2D locations on the footage, represented by that red and green X and Y data. This green dot is the 2D projection of a 3D point. So all you need to know is: the tracker we're used to is a 2D point, and the corresponding green dot is (the projection of) a 3D point. When we zoom in very close there's a bit of distance between them; they don't quite overlap. When the camera solver found its solution, it did its very best to make every 2D point and its 3D point overlap perfectly, because that would mean a perfect match, but you're never going to get that perfection, since there are tiny bits of error and noise in the footage, so it finds the best solution that's closest to all of them. The average error on a per-tracker level (this one has an average error of 0.785) is measuring the average distance, in pixels (pretty sure it's pixels), between those two points.
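(Put a little more formally, and glossing over the exact weighting Blender uses, the per-tracker error and the overall solve error being described here are reprojection errors:)

```latex
% x_{t,f} = tracked 2D position of track t at frame f (pixels)
% X_t     = reconstructed 3D point for track t (the green dot)
% P_f     = camera projection at frame f
e_t = \frac{1}{|F_t|} \sum_{f \in F_t} \bigl\lVert x_{t,f} - P_f(X_t) \bigr\rVert

% and the solve error shown in the header is (roughly) the average of these:
E = \frac{1}{T} \sum_{t=1}^{T} e_t
```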
An error of zero means there are zero pixels between them; they overlap perfectly. Now let's find the tracker with the highest error: this one has 1.3, which is a lot... anything worse? 1.4, I think that's our worst. So on average (not on every frame, but on average) its 3D point and its 2D point are quite far apart, which means our camera solve isn't perfectly matched to that data; it isn't perfectly constrained to it, and you can actually see a visible gap there. So when I say we want the solve error to be as small as possible: that number is the average of these averages. Each tracker has an average error, and you combine all of those to get the overall average solve error. Making it small means making the 3D points overlap the 2D points on all our trackers as much as possible, and the better your trackers are, the better the solve error you get. But there are also ways to push it even lower. (By the way, if you're wondering how I opened this side panel: T, same as in the 3D viewport; in the animation workspace, T does the same kind of thing.)

So how do we lower the error without adding or removing trackers? There are a lot of methods. The first one might not necessarily work, but if it doesn't, you can just undo it, so let's save. We want to look at these graphs and see if anything abnormal is going on. We talked about this before: the fact that this curve is offset from the rest is just because it's tracking the car rather than this area over here, so that's not the kind of error I mean; I mean something that simply does not look right. There might not be anything, to be honest, and it's usually very obvious when there is. Let's do a test, which may not change anything: say we like this track except for the very end, where it does something weird (that's this bit right here). We want to cut it off a bit early, so we use this button we talked about, which clears everything to the right of the current frame. Clear it, so we get rid of that little tail, and then we can solve again with this slightly altered data set and see if anything changes. We're going to disable the Keyframe checkbox, which means it won't try to find a new pair of keyframes to reconstruct from; we'll keep them at 1 and 43. So when we hit the button, the only difference we'll see (unless something horrible happens) is due to the tiny change we made. Let's see... before, we had 1.059; we deleted the little tail, solved again, and now we have something just barely lower. It did help, because a smaller solve error means everything overlaps a bit more nicely on average, but since there wasn't much data involved in that change, it was never going to make a big difference.

The next thing is by far the more important one: we need to give Blender as much information as possible about our camera. Did we film with an iPhone? With one of those RED Epic cameras that cost $20,000? With a DSLR? It matters, because we need to tell Blender what lens we were using and how the sensor works; all of that connects the camera itself to the tracker motion we're seeing here.
Generally, especially if you downloaded the footage, you won't know a lot of the details about your camera. In my case I actually don't know most of the details either, so let's see what we can actually recover. If we go into this camera tab you'll see a whole bunch of settings: sensor width, pixel aspect, optical center, and then a whole bunch more under Lens. We want to fill in as much of this as we actually know, because we filmed it: do we know the focal length, how zoomed in we were? Do we know the sensor width, the physical size of the sensor inside the camera? Do we know the optical center? That one's a bit confusing. Basically, your lens is a stack of pieces of glass, and if you think of light as a cone coming in or out of the lens, its axis isn't always perfectly centered the way you'd like; it's sometimes a bit off, and the focal length is measured along that ray. If you don't care, it's not that important (there are automatic ways to account for it), but just know that the optical center is saying: if you shoot light out through your lens, where does the center of that projection land? Right now it's set to these numbers because our footage is 1920 by 1080: 1920 divided by 2 gives the first number, 1080 divided by 2 gives the second, so by default it assumes the exact center of the footage. That's a reasonable guess, and it will only be a bit off depending on how bad your camera or lens is; a really bad lens can be quite off-center.

If you know some of this, great. If you know what camera you used, you can pick one of these presets; say it's an iPhone, click it, and it automatically fills in some settings, and if you used a fairly mainstream camera there's actually a good chance it's listed here. But we're going to assume we don't know what it is, so none of the presets apply. How do we figure something like this out? The first and most important value is the focal length, again, how zoomed in you are; you change the focal length on your lens to zoom in and out. Something like 60 millimeters is very zoomed in; something like 5 millimeters is GoPro wide-angle territory. The key idea: if we change this value and solve again, and the number drops, the new value is most likely more accurate, because by changing our focal length we got a solve that more accurately represents our camera motion, so the focal length must be closer to the true focal length. In other words, experiment with the focal length and see if it reduces the solve error. For example, change it to 25 and solve again (with Keyframe turned off so it keeps the same keyframe A and B): it got a lot better, 0.75, so we're definitely headed in the right direction. Keep increasing, it can only get better or worse: 26 got even better, 0.59. (Again, under 0.5 is a very good camera solve; under 0.1 is spectacular, and you're not going to get under 0.1.) Let's try 27 and solve that: okay, now we're getting into worse territory.
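(That hot-and-cold game can also be brute-forced in Python before handing it over to the Refine option. A sketch, with the candidate focal lengths picked arbitrarily and the usual caveat that the solve operator needs a Movie Clip Editor context.)

```python
import bpy

clip = bpy.data.movieclips[0]
camera = clip.tracking.camera
clip.tracking.settings.use_keyframe_selection = False  # keep keyframes A/B fixed

best_focal, best_error = None, float("inf")
for focal in (24.0, 25.0, 26.0, 26.5, 27.0):   # arbitrary candidates, in mm
    camera.focal_length = focal
    bpy.ops.clip.solve_camera()                # Movie Clip Editor context needed
    error = clip.tracking.reconstruction.average_error
    print(f"focal {focal} mm -> solve error {error:.4f} px")
    if error < best_error:
        best_focal, best_error = focal, error

camera.focal_length = best_focal
print("best guess:", best_focal, best_error)
```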
So 27 was actually worse, which means we need to go back in the other direction: 26.5 is slightly better again. You get the idea. Instead of playing this hot-and-cold game, where hot means you're getting closer and cold means you're drifting away, we want to automate it, and that's what the Refine option is for. We can have it refine the focal length: just like the Keyframe checkbox found one of the best pairings of keyframe A and B for a low solve error, refining the focal length tries a whole bunch of focal lengths and gives us one that produces a good result. Let's save and do it. It seems the best one was around where we already were, 26.14, and we get a solve error of 0.5898, which is much better than our initial 1.059; it almost halved it. That's a substantial upgrade.

So what else can we modify? We can change the optical center, like we talked about, and we can deal with lens distortion. If you're using a GoPro you'll want to listen in on this one. Lens distortion, if you didn't know, is how much your image is being curved and distorted by the shape of your lens. A GoPro has that iconic wide, fisheye look with a lot of lens distortion, and if we have a tracker near the edge of the frame, like this one, its motion isn't going to be very accurate, because a lot of that motion comes from the curvature of the image: it's a flat image, but it's curving where it shouldn't be. A way to tell whether you have lens distortion is to take a line that's straight in real life. If you went outside and looked at the street, the line from this point to this point is straight, but here it might look slightly curved; that means there's some lens distortion, and we need to undistort so it straightens out. You can change the K1, K2 and K3 values to describe this; they define different kinds of curvature, so you can have fisheye distortion, barrel distortion, basically different types of concavity. I wouldn't adjust these by hand. Instead, since we're done with the focal length, we'll now refine only K1 and K2, which are the two main distortion terms; K3 is negligible (I think SynthEyes even lets you do K4 and K5, which are very, very negligible). Let's see if we get a better solve: we were at 0.5898, and now we're down to 0.5586.

We didn't see anything visibly happen, but we know it found some lens distortion, since we now have some K1 and K2 values. How do we actually see it? Inside Clip Display we can enable Render Undistorted. It's very subtle, but I'm going to toggle it on and off, and I want you to watch the footage: you can see it kind of un-bending the image. Render Undistorted shows us the undistorted image, with the lens distortion removed, and when the solver accounts for this it naturally gives us a lower solve error. The last thing we can do is, again, the optical center. This one sometimes breaks a camera solve and sometimes improves it a lot, so we're going to try it, and if it doesn't work, it doesn't work. What I'm actually going to do first is undo our lens distortion, so at this point we're back to the solve error we had before accounting for it.
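(For reference, K1, K2 and K3 are the coefficients of the usual polynomial radial-distortion model, measured around the optical center. This is a simplified statement of it; Blender's exact conventions may differ in the details.)

```latex
% (x_u, y_u): undistorted coordinates relative to the optical center
% (x_d, y_d): distorted coordinates as recorded through the lens
r^2 = x_u^2 + y_u^2

x_d = x_u \,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad
y_d = y_u \,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
```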
Then, for Refine, we're going to do all four: focal length, optical center, K1 and K2. We already have our focal length set, but maybe it can find something a tiny bit better. Hopefully it doesn't break our solve... and there you go. It changed our optical center very slightly, from 960 to about 974, and the 540 shifted a little as well, and now we have a solve error of 0.3624. This is a great camera solve. Can we do better? We can. I'm going to show you the final trick up our sleeves, and it will make the process slower and more tedious, but it's totally worth it, because it's how you avoid drifting when you put a 3D object into the scene. The solve error essentially (not exactly, but in some sense) represents how much drifting you're going to get: if you put a cube in your scene, how much does it look like it's really there versus sliding around on the floor a little because everything isn't matched? Tedious, but totally necessary.

Like we talked about, we already have Info enabled, so when we look at our trackers we can see the average error of each one, and each of these is now smaller than before: initially some were around 1.4, and after all our refining we're down to around 0.48 and so on. But just like last time, some trackers are worse than others. This tracker right here has an average error of 0.172, which is very, very good, whereas the worst offender here (is it 0.4? it's 0.483, I think) is by far the tracker contributing the most error to our average; remember, the solve error accounts for all of them, so one weak link can drag down the whole batch. What we want to do is take its influence and bring it down: it's still accounted for, it's still a tracker, it just doesn't contribute as much to the weighted average. It's the same idea as stabilization, where we could stabilize to multiple trackers with different influences. With this tracker selected, you'll notice that alongside Stab Weight, which we talked about before (the 2D influence), there's Weight, which, you guessed it, is the 3D influence. If we bring it down to something like 0.5, we're saying we want this tracker to have half as much influence as everything else.

So let me ask: what do you expect to happen when we solve again, having told the thing with the highest error not to contribute as much? Most likely (I'd almost say guaranteed) the error drops. We were at 0.3624; we're not refining anything, we don't have Keyframe enabled, so we're changing only this one thing and solving again: 0.33. From 0.36 down to 0.33, and it basically recalculated the error on all the trackers, generally making them slightly lower. Now there's a new worst offender, so bring its weight down maybe halfway too, and we're at 0.31. You can play this game over and over and over, but you do not want to get to the point where all the weights are around zero, because then, yes, you've got a low solve error, but is it even representative of your shot in the first place?
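(The "find the worst offender and halve its weight" loop is easy to script as well. A sketch that assumes a solve has already run, so the per-track average errors are populated, with the same context caveat for the solve operator.)

```python
import bpy

clip = bpy.data.movieclips[0]
tracks = [t for t in clip.tracking.tracks if t.has_bundle]  # solved tracks only

# Worst offender = the track with the largest average reprojection error
worst = max(tracks, key=lambda t: t.average_error)
print("worst:", worst.name, round(worst.average_error, 4))

worst.weight = 0.5            # 3D influence, the analogue of the 2D stab weight

bpy.ops.clip.solve_camera()   # re-solve (Movie Clip Editor context needed)
print("new solve error:", clip.tracking.reconstruction.average_error)
```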
You need to balance lowering the solve error against staying true to the tracking information you actually got; that's something to keep in mind. Now, remember the blue line we talked about, the one representing our camera solve: it was waving around much more wildly than it is now. Notice that we have to look at the graph this far zoomed out to even see the amplitude, the top-to-bottom size, of the red X curve, and from this vantage point the solve curve looks very, very flat. Again, you get the most oscillation where your trackers move the most, and you can see it has flattened out: we have a much better camera solve. At this point you could go crazy; you could try to get this below 0.2, and I'm sure you can with enough work, and you can also add new trackers to replace ones that maybe aren't the greatest. (Let me just close this window... there we go. It's a nitpick, but I don't like how hard it is to grab windows.)

Okay, we're happy with our solve error and we'll say this is done; no more refining needs to happen here. 0.3138 is a great average reprojection error, a very good result. So now that we've basically calculated our camera path, how do we use it inside the 3D viewport to add objects to our scene and do whatever with them? We've put in a lot of work (some people might say too much) to get this camera solve with a very low solve error of 0.3138, so before we bring it into the 3D viewport, let's review what we actually did. We had a lot of 2D trackers, each with its own X and Y data, and we used Blender's magic algorithms to take all those variations across all the trackers (with at least eight trackers on every frame) and extract 3D information from them. Now we should have a camera path: a camera whose motion matches the real-world camera that filmed this footage.

So how do we take this camera and bring it into the 3D viewport so we can add things into the scene? It turns out it's very, very easy; it's actually one button click easy. Before we click the magic button (I'll keep you waiting), we open the 3D viewport. By default we have the default scene with its cube, light and camera, and ideally we want this camera right here to be the one that ends up moving; we don't want to add anything else. The button is Setup Tracking Scene. You click it, and a bunch of things happen.
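(That one button corresponds to a single operator call, if you ever want it in a script. A sketch; it, too, expects a Movie Clip Editor context, so the button itself is usually the easier route.)

```python
import bpy

# Creates the Camera Solver constraint, the background/foreground view layers,
# the shadow-catcher ground plane and the tracker empties in one go.
bpy.ops.clip.setup_tracking_scene()

scene = bpy.context.scene
print("view layers:", [layer.name for layer in scene.view_layers])
print("camera constraints:",
      [c.type for c in scene.camera.constraints])   # expect 'CAMERA_SOLVER'
```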
First of all, a plane gets added, and it's in a new collection; in fact, it's in a new render layer, and we now have something called "background". We're going to deal with that, because it's not actually what we want, but at least it's isolated in its own collection. Much more importantly, the camera now comes with what look like our tracker markers; you can see this one over here in the distance, which should be the one on the car. So you might guess that these are the 3D representations corresponding to our trackers, and that's exactly what they are. If we play with the spacebar, you can see the camera moving very slightly, and it's more obvious when we go into the camera view, which also automatically has our background image sequence added to it. If we play this it's going to be slow, mostly because the Movie Clip Editor is still open, so we can close that for now; we'll probably need it again, though.

You can see we have something that kind of looks tracked... and kind of looks like the plane is hovering and drifting above the sidewalk, which is why everything isn't lining up. If we focus on this point right here, that is definitely a lot of sliding. The reason is that these trackers, most of which are on the sidewalk, are supposed to be on the floor, yet they're hovering above it, which is why the plane doesn't look like it's sticking. If we bring everything down, it should look much better. This is a question of what's called orientation in matchmoving (matchmoving being the branch of motion tracking that usually, though not always, deals with the 3D side). We're dealing with the big-boy operations now. We can fix the orientation manually, by moving everything down, or automatically. I'm going to start with the manual approach just so you know what's going on, and then we'll switch to the automatic approach, which is much better and faster in most cases.

In the side view we want to bring some of these down, or I guess everything down, so it's basically sitting on the floor, and you can see there's still some variation: not everything that should be on the floor is on the floor. Then in the other side view we can shift things sideways so the cube is centered, rotate until everything is pretty much flat, and bring it down again; then do more adjustment in the side view. It's very hard to tell whether you're doing this correctly, which is why manual isn't the best way to go. Something that helps is these "empties", which aren't really empties: if you look at the outliner, there are no empties in it, just our camera with a Camera Solver constraint. In this menu right here we can take the display size of these tracks and bring it down so they're a bit smaller, which makes the orientation adjustment much easier. I'm thinking we need to rotate a bit more and bring it up, and if we check the camera view it looks pretty much correct; the plane looks like it's on the same horizon as the sidewalk, although everything looks pretty big, and I'm pretty sure we can just scale it. We can, although then not everything is resting on the floor anymore and we need to adjust it all again, so you can tell this process is very time-consuming. But now our plane is smaller, and it actually looks like it's doing a fairly good job of staying where it's supposed to be, with minimal drifting.
If we zoom in here, you can see this is much better than what we had before; still a bit of drifting, because it's not perfect, but not a big deal. So that's manual orientation, and it's not what I recommend. Instead, let's open the Movie Clip Editor in a new window, set it to our clip, and you'll see that the Solve panel has an Orientation section just for this: a whole set of operations dedicated to making sure the floor is where it's supposed to be. The way it works is that we select three trackers that are on the floor. Of course, we first need to define what "the floor" even means; it could be the sidewalk, this little dirt mound over here, or even the street, but we're going to pick the sidewalk. So shift-click one, two, three trackers; they define a triangle, and we want the polygon they'd form to be resting on the floor. With those three selected, hit the Floor operation, which you can see moved our camera, and if we go into the camera view everything is now pretty much lined up. It's not a perfect solution, because it relies entirely on the 3D positions corresponding to those trackers; if they're not very accurate, you won't get a good result. We could also pick a different set of three (one, two, three, skipping that one), run the Floor operation again, and get something a bit different. Is it better? It really depends on which trackers you chose.

So that's one thing we can do, and it takes away one axis of freedom. The plane is now on the floor, but we can still slide it around and stay on the floor, and we can still rotate it, so we keep narrowing down these axes of freedom until the setup is perfectly constrained. The first thing I want to deal with is the sliding: where is the center of the plane? We can pick a tracker like this one, say "this is the origin", and all we have to do is hit Set Origin. You can see everything has slided (slid? there's a word for this), moved, translated over. Of course, the plane isn't what actually moved; it's the camera. If we undo and set the origin again, you can see it's the camera that moves, giving the illusion that the plane moved. That's a very important idea. So now we have the plane on the floor and translated where we want it, but we can still rotate it and meet all our conditions. For rotation, since we have an origin specified, we can say that this tracker, relative to the origin, defines an axis: the line between this tracker and the origin can become, say, the X axis or the Y axis. I'm going to hit Set Y Axis first, and you can see our Y axis now runs in exactly the direction defined by those two trackers; we could also make it the X axis, which flips everything by 90 degrees, so it's pretty much the same thing.

The last thing we can do (what we have is already fine, but let's play with it) is choose how far away the camera is, essentially the scale of the scene, by picking two trackers and declaring the number of units between them. If we take these two trackers, give them a distance of 1, and hit Set Scale, everything becomes very, very big, and really that just means the camera got very, very close, because we said there's only one unit between those two points.
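(The Floor, Set Origin, Set Axis and Set Scale buttons are all clip-editor operators, so the whole orientation pass can be sketched in a script too. The tracker names are placeholders, the selection helper is deliberately crude, and every operator call here needs a Movie Clip Editor context.)

```python
import bpy

clip = bpy.data.movieclips[0]
tracks = clip.tracking.tracks

def select_only(names):
    # crude selection helper for this sketch
    for t in tracks:
        t.select = t.name in names

# 1) three trackers on the sidewalk define the floor plane
select_only({"Track.001", "Track.003", "Track.007"})
bpy.ops.clip.set_plane(plane='FLOOR')

# 2) one tracker becomes the world origin
select_only({"Track.001"})
bpy.ops.clip.set_origin()

# 3) a second tracker, together with the origin, defines the Y axis
select_only({"Track.003"})
bpy.ops.clip.set_axis(axis='Y')

# 4) declare the real-world distance between two trackers to fix the scale
select_only({"Track.001", "Track.003"})
bpy.ops.clip.set_scale(distance=5.0)
```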
If instead we pick something like 5, which is many more units, and Set Scale, the camera moves away and everything looks much smaller. If we get ridiculous, like 15, everything becomes very, very small; let's go back to 5. You can do the same kind of orientation with the Wall operation, which as you can tell is basically the same thing as Floor but flipped up by 90 degrees; now we're talking about a wall instead of a floor. This is definitely a floor-based shot, but you could totally see yourself tracking a wall and needing that kind of operation.

So now we have everything oriented correctly. I'm going to close this, and then in the camera properties we can take our background image and make it opaque by bringing the alpha up to 1, which is one hundred percent. Now we can see everything, and if we play this, it's a very good solve, as it should be: we have a very low solve error and the orientation done correctly, so everything looks like it's sticking on fairly accurately. If we wanted to, we could do some last-minute adjustments. I don't think we'll need many; that view could be rotated a little, so I'll rotate it a tiny bit and see how it looks in the camera. Yes, everything still looks stuck on. You could spend a lot of time on this, but let's say we're done with this step and we want to start adding 3D objects to our scene.

We already have a plane and a cube, but let's talk about exactly what's happening here. In the rendered view we're currently in Eevee, so in the Render tab (let's save first) we switch over to Cycles, and you can see some weird stuff going on. First, this plane is grey, and that's because it's a shadow catcher: if we go into its object properties, under Visibility, you can see Shadow Catcher is enabled; disable it and it's just black. A shadow catcher is a transparent object that still keeps the shadows cast onto it, hence the name. If we go into the Render tab and look under Film, Transparent is enabled, so it really is transparent. I don't know if this is a bug with the release candidate, because just one version earlier you could actually see the background through it; just imagine that the background image is being shown through this shadow catcher. It's probably a temporary bug, not a big deal. So let's delete this cube, which is cut in half by the shadow catcher (we can see the bottom half by going underneath it). The catcher is transparent, doesn't show objects through it, and supposedly holds shadows, even though we can't see any yet; we'll deal with that. I'm going to delete the cube and pick something more interesting, like the monkey, probably the standard example, and move and rotate it (let's go back to rendered view) so it's sitting on top of the shadow catcher. (Sorry for the stuttering; this is pretty much the first time I've talked today, since I took a one-day break from recording because my throat was dying.) So now we have this monkey, this Suzanne, and we can make it a bit smoother by adding a Subdivision Surface modifier; that should be good, so we'll apply it and enable Shade Smooth so everything looks nice and smooth.
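(The render-side switches being toggled here can be set from Python as well. A sketch against Blender 2.8's Cycles API; in 2.8 the shadow-catcher flag sits on the object's Cycles settings, and the plane's name is an assumption, so substitute whatever Setup Tracking Scene called it in your file.)

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.film_transparent = True        # Render > Film > Transparent

ground = bpy.data.objects.get("Ground")     # assumed name of the tracking-scene plane
if ground is not None:
    ground.cycles.is_shadow_catcher = True  # transparent, but still receives shadows
```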
good. We'll apply it and enable shade smooth so everything looks nice and smooth. Okay, so we have our monkey, but where is the shadow? This is the issue we were talking about earlier: the ground plane is in a different collection, in a different render layer, which means right now we're only viewing the foreground. If we go to the background, there's our shadow; it's separated, and that might be something you want. Why is this set up by default when you hit Setup Tracking Scene? Well, when we go to the compositor (we already talked a lot about compositing), you can see we have two Render Layers nodes, one for the background and one for the foreground, and the advantage of separating them is that, for example, we could add a Hue/Saturation/Value node here and only affect the color, value and saturation of the shadows. So the two are kept separate; in this case that's not actually what we want, but just so you know, there is a reason for it.

So we're gonna delete that node and the background Render Layers node, and let's talk about what we have so far. We have our movie clip, which is really the image sequence, being undistorted, meaning we've already calculated the lens distortion and this node is basically inverting it, so now everything's nice and flat and we don't have any of that curvature. Then it goes into the Scale node, which just makes sure the footage takes up the whole frame, so don't really worry about that node. So we have footage and it's being undistorted; this first Alpha Over node is no longer needed because we got rid of the background Render Layers node, so we're gonna plug the undistorted footage straight into here. To zoom out: we have this undistorted image sequence as the background of the Alpha Over, over that we put our foreground, and that goes into the Viewer, which is what we're seeing in the compositor. Of course the reason we don't actually see our 3D objects yet is that we need to render, but before we do that, back to the layout.

You can see that the shadow catcher is trapped in the background layer, so we can delete that layer so everything is in the foreground, and we want to take the plane and put it in our foreground collection. Now we have everything in the same render layer, in the same collection, and everything's good, so we can click X to delete the now-empty background collection. Now everything should be working, so let's hit F12 to render and see what we get. We're getting this render from the camera's point of view, which has our object and our shadow catcher, which is transparent except for the shadow. Hopefully we'll be done rendering soon; this is a good opportunity for me to get some water. Okay, so it renders through this and then does another pass just to get the background. There we go, and you can see that our object, with the correct orientation and everything, is indeed in our scene.

This is the really cool part: back in the compositor we can see what's happening. Again, our Render Layers node, which is everything in the 3D viewport, is being put over the undistorted background image, and the reason you don't want to keep the distortion is that once an object gets near the edges it's gonna look like it's drifting because of that curvature, so it is good to get rid of it. And since these are isolated, like I said, we can add a Hue/Saturation/Value node and mess with a bunch of stuff, so
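For reference, here's a rough idea of how that compositor tweak could be scripted. This is only a sketch under assumptions: the node names "Render Layers.001" and "Alpha Over", and which Alpha Over input is the background, are guesses you'd need to check against the tree Setup Tracking Scene actually generated.

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# insert a Hue Saturation Value node between the background render layer and the
# Alpha Over, so only the shadow pass gets the color adjustment
hsv = tree.nodes.new('CompositorNodeHueSat')
bg_layer = tree.nodes["Render Layers.001"]     # assumed name of the background Render Layers node
alpha_over = tree.nodes["Alpha Over"]          # assumed name of the Alpha Over node

tree.links.new(bg_layer.outputs["Image"], hsv.inputs["Image"])
tree.links.new(hsv.outputs["Image"], alpha_over.inputs[1])   # inputs[1] is the lower (background) image

# in 2.8x the adjustment values are node properties rather than sockets
hsv.color_value = 1.2        # brighten the shadows a touch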
we can bring up the value, which makes everything brighter; as you can see, our monkey, our Suzanne (trying to zoom in here), is getting brighter. Of course this also affects the shadow, because we've now put everything in the same render layer, so I'm just gonna get rid of this node and put things back. If we're happy with all of this, we can get ready to render it out. I'm gonna enable a timeline here just to make sure everything's tracking on correctly; we'll get a bit of glitchy movement, but that's just Cycles updating over and over again.

Okay, this is looking good, and before we render I just want to make sure you have a couple of things set so there are no issues. First of all, in the render tab, like we've talked about over and over again, make sure you're using Standard color management, because we don't want our background footage to be pushed through a Filmic grade by this compositor; essentially we want it to look the same way it looked when we filmed it. Also make sure Film > Transparent is enabled, or else your shadow catcher is gonna be a solid object that still has the shadow but that you can't see through. Then of course the obvious things: in the output tab we can set this to 30 frames per second, and since I want to render quickly I'm gonna keep the samples very low, so our render samples can be something like 2, which is gonna look very bad, but it's fine; we'll use the GPU. Then we can render this as a movie; again, these are just export settings, it's really not that important. FFmpeg video gives us a video file, and we'll use an MP4 because that's standard (it should be what you're watching this on YouTube with), with low quality, since we're trying to make this render as fast as possible. Once everything is set up, Ctrl+F12 renders the animation, and that is the speed I like. Maybe we'll render something like 20 frames, which should be a long enough sample to tell if it's sticking on correctly.

I'll take this opportunity to talk about the plan for the rest of the series. This tutorial has already been going for a while, maybe around an hour, so I don't think I'm gonna do the other two camera tracks in this part; it would take forever. We have another shot with a wall and a floor (I kind of reversed those: a floor and a wall), and we also have the tripod shot I was talking about, where we only change the process a little and the orientation is a bit different. So I think I'm gonna make a shorter part four that just deals with those two examples, maybe a twenty or thirty minute video, and beyond that we still need to talk about object tracking and deformation tracking, and then I think we're done with everything we need to cover for motion tracking in Blender. I know, it's sad.

Okay, so we have fifty-something frames, which is way more than we need, and hopefully, if nothing is corrupted, we should be able to... you know what, I didn't set an output path. Actually I did: it's the temp folder, which means we need to find that temp folder to actually view our file; do not make this mistake. Okay, so C:\tmp is where everything gets saved by default. We can see our footage, and of course we have this grain because we only have two render samples, but if we set this to repeat we can see that it's tracking on very
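To recap those export settings in one place, here's a minimal bpy sketch of roughly the same configuration, assuming Blender 2.8x with Cycles and a GPU compute device already enabled in Preferences; the output path is just an example, so set it to wherever you actually want the file.

import bpy

scene = bpy.context.scene
scene.view_settings.view_transform = 'Standard'    # don't push the footage through Filmic
scene.render.film_transparent = True               # keep the shadow catcher see-through
scene.render.fps = 30
scene.cycles.samples = 2                           # intentionally low, just for a quick test
scene.cycles.device = 'GPU'                        # assumes a compute device is set in Preferences

scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'               # .mp4 container
scene.render.ffmpeg.codec = 'H264'
scene.render.filepath = "//camera_track_test.mp4"  # example path; set this so you know where the file goes

bpy.ops.render.render(animation=True)              # same as Ctrl+F12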
nicely, and we didn't even mess with the light that much, but the lighting already looks like it matches pretty well. You can see with these cars that the shadow is directly under them, which means the sun is directly overhead; you can also see this with the bikers in the background. But say we had a different scenario: this light is what controls the direction and softness of the shadow, so we could bring it up and directly overhead, and now our shadow is exactly under Suzanne. If we go into the light settings (again, make sure you're in Cycles to get some of these options), there's the size; bringing it up gives us a nice soft shadow. In this case it's kind of hard to tell, it's a bit soft, so maybe something like 0.5, and that's how you can match your shadow (I'll sketch this light tweak in code below).

But I think I've been rambling a really long time about camera tracking for just one shot. I wanted this to be very comprehensive, so hopefully you learned a lot about camera tracking in this video, and there will be more motion tracking tutorials coming soon. If you enjoyed it, I have a Patreon where you can support these high quality tutorials (that's what I call them; I do think they're pretty good). If you feel like donating, you get benefits like behind-the-scenes content, and you basically learn things before everybody else; I'm like "oh, I'm gonna make this video, you guys should know, I'll tell everybody else in two days." So if you're interested in supporting these videos, that's where you go. Hopefully you enjoyed this video; again, I'm really sorry for all the stuttering that must have happened in the second half. Maybe I can edit some of it out, but we'll see; can't edit that one out, but whatever. Hopefully you guys enjoyed, and I'll see you guys in the next one.
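As promised above, here's a small sketch of that light tweak, assuming the scene still has the default point lamp named "Light"; a sun lamp would use the angle property instead of shadow_soft_size, and the overhead location is just an example.

import bpy

light = bpy.data.objects["Light"]        # default lamp name; yours may differ
light.location = (0.0, 0.0, 8.0)         # example: roughly straight overhead so the shadow sits under Suzanne
light.data.shadow_soft_size = 0.5        # bigger size = softer shadow in Cycles (point/spot lamps)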
Info
Channel: CGMatter
Views: 80,530
Rating: 4.9703798 out of 5
Keywords: blender, 2.8, motion, tracking, camera, matchmove, tutorial, 3d, solve, orientation, compositing, cgmatter, error
Id: jJ2zONKJ2Uk
Length: 67min 56sec (4076 seconds)
Published: Tue Jul 16 2019