Hi, I'm Ben Brownlee
for Boris FX and we're going to be taking
a look at getting started with the new
camera module in Mocha Pro. We're going to
start off easy and then we're going to dive deep into
some of the more advanced features. So here is our
first shot and it shouldn't offer us too many
problems when we're coming to do 3D tracking
on this. If I come up to the workspace drop-down menu in the toolbar, you'll see we have a brand
new one that's called Camera Solve. So coming into here
gives us all the tools we need to work with the new camera solve module. And you'll see
that we have two new panels over on the right hand side. Now the camera solver
found in Mocha Pro 2024 and above is completely
different to the older camera solve that we had in previous versions. So old projects where
you've used the original camera solver will still have that
data when you open up the project in Mocha Pro 2024 and above, but the only thing
you'll be able to do with that data is to export it. You'll only have the new
solver parameters available to you. Now this camera solver
is powered by Boris FX SynthEyes and it's designed for a user-friendly camera solving experience. So there are very few
controls that we have here. Now when it comes to camera solving, the biggest button is the most important. This is our solve button. So this is gonna run
an automatic solve for us. It's going to find the
points that are of interest. It's going to align the scene. It's gonna solve the camera. It's gonna do all of that in less time than it took for me to
actually say all of those things. And if we just play this through, you can see that we've got our points that are aligning to
the floor and sticking in there. And that's looking pretty good. We also have a ground plane, which is stuck halfway up the scene at the moment, and we also have an origin point, which is where the
three axes all come together. So let's come in and
just set our ground plane. All I'm gonna do is click and drag over all of these trackers here. And I'm gonna shift-click and drag over some of the trackers over there. In fact, let's just drag-select all of the ones that are on the ground plane there. And I can come over to my new 3D panels. So over on the top
right, we see the 3D objects, which lists all the objects we have. And we also have
the 3D object properties. Again, we'll come into
more details on those soon. But at the bottom,
we have this align area. So we can align the ground plane to the points that we have selected. And you'll see that we had a little bit of an adjustment there,
but not a lot of things changed. So if I have only one point selected, over in my align controls, I have a button pop
up that says make origin. So if I click on this one, we can see my three
axes now pop to that point as my origin point. This is where (0,0,0) is going to be. This is the center of our scene. So if I play this back now, you can see that that
ground plane is properly aligned. We've done our auto
track, we've aligned our scene. The last thing we're gonna do is we're just going
to export that camera data. So coming back into our solve data, I can choose what type of camera data I want to take out from these formats, and then I can just save it or copy it to the clipboard. And what you do with this camera data is definitely worth
having videos of their own because every single host application is gonna work with
that data slightly differently, but we'll talk
about those in other videos. And that's really the basics of it. If you just want an auto track: come in, click on Solve, align your scene, and export the camera data. Really, really easy. What if we want to
go a little bit deeper? Well, let's clear our solve down in the Camera Solve panel, and let's walk through
this process a little bit slower. So the first thing I'm gonna do, again, is just leave
everything at the default settings, and I'll click on the camera solve, and take a look at
the top right-hand corner when we do that. So it's gonna go
through a series of processes to work with this, and it's gonna say things
like blipping, peeling, solving. Now let's talk about those words. So when we're
breaking things down like this, what we're looking for first is features. So a feature is any sort
of point of interest in the shot that might be useful to track. So it could be something like this: these lights up at the top here, just this light shape here, or this little area down on the wall, or any of the other hundreds of possible points in this particular scene. Now these can either be auto features, so something that the
camera solver finds for us, or if we want to have
those features in a specific place, we can use Mocha's planar tracker to track in individual points. And we'll take a
look at that in the next shot. The process of creating
these features automatically starts with blipping. And blipping is a SynthEyes term, which means it looks at the shot and finds all of the points of interest in it. When it's done that
across the entire range, it then tries to find
the paths of those blipped areas. It then takes the best ones and turns those into trackers; in a process that's called peeling, we end up with 2D trackers. The next stage is to solve the camera. So by doing some quite clever maths, it takes those 2D trackers and finds the relationship between them, turning them into 3D trackers, which is what we actually see here. As I scrub through, we
can see these are 3D trackers, because if I come up
to my viewer choice up here, we can take a little
look at these in perspective, and we can start to move around the scene and see these in 3D. If I play this through, we can see the camera
is also tracked inside there. After we've done the
solve, we have one more process, which is a cleanup phase, where it takes away bad trackers, trackers that have too high an error rate, and sort of cleans up those areas to give us our nice, clean tracking scene.
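If you want to picture the order of operations, here is a rough outline of that whole automatic solve in Python. This is purely a conceptual sketch of the stages as described above, not Mocha or SynthEyes code, and every function it accepts is a hypothetical stand-in.

```python
# Conceptual outline only: the five callables are hypothetical stand-ins
# for the stages described above, not Mocha/SynthEyes internals.
def auto_camera_solve(clip, find_blips, link_paths, peel, solve, cull):
    blips = [find_blips(frame) for frame in clip]   # blipping
    paths = link_paths(blips)                       # find each blip's path
    trackers_2d = peel(paths)                       # peeling -> 2D trackers
    camera, points_3d = solve(trackers_2d)          # the clever maths
    return camera, cull(camera, points_3d)          # cleanup phase
```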
So how can we tell whether this is a good solve or not? Well, probably the easiest way to look at it is to come down to this bar down here. This is our H-Pix graph, and this is showing us the average error of the currently solved 3D trackers. H-Pix stands for horizontal pixels, and what this really is, is the difference between the position of the original 2D trackers and the position of the solved 3D points projected back into the frame. Any sort of drift between where the 2D tracker was and where the 3D point ends up, that is our H-Pix error here. And again, this is the average error, so we could have individual feature points here which have a very high error, so they're really not sat in the scene very well. As a rule of thumb, anything of one pixel and below is considered a good track. Here we're at 1.04. It's pretty good, and I'd probably be happy to export this.
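As a back-of-the-envelope illustration of what that number means, here's how you might compute an average horizontal-pixel error yourself. The exact formula SynthEyes uses isn't covered in the video, so treat this as an assumption about the general idea, with made-up tracker positions.

```python
import numpy as np

# Assumed idea: hpix is the average horizontal distance, in pixels, between
# where a 2D tracker sat in the frame and where its solved 3D point lands
# when projected back through the solved camera.
def average_hpix(tracked_x, reprojected_x):
    return float(np.mean(np.abs(np.asarray(tracked_x) - np.asarray(reprojected_x))))

# Four trackers, each drifting slightly from its reprojected 3D point:
print(average_hpix([410.0, 862.5, 120.2, 1505.8],
                   [410.9, 861.1, 121.4, 1506.6]))  # ~1.08, just over the rule of thumb
```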
But there are other parameters that we can change to help this out as well. And let's start back in the camera solve parameters. The first one is the focal length. So this is the 35 millimeter equivalent of the focal length of the lens that was used in this shot. The focal length isn't
the distance between the camera and the area that's actually in focus. When we're talking
about focal length here, we're actually talking
about the focal length of the lens. So here, this is a 28 millimeter lens. On a full frame camera, this is gonna be 28 millimeters between the optical center of the lens and the sensor. And if I take a bigger lens now, this is a 300 millimeter lens, which means we have a much larger gap between the optical center of the lens and the sensor itself. This focal length
parameter is actually pretty important, especially when we
start to take this data into other host
applications and start to work with it. Now, the way of thinking about it is that shorter focal lengths have a larger amount of parallax within the image. So the speed at which the foreground points move is going to be greater, relatively speaking, than the ones in the back, compared with a longer lens. With a longer lens, you'll see very, very little parallax movement within an equivalent shot.
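A quick pinhole-camera sketch can show why. The numbers below are my own made-up illustration, not from this shot, but the geometry is standard: to frame a subject at the same size, a longer lens shoots from farther away, which evens out the depths in the scene and flattens the parallax.

```python
# Toy pinhole model: horizontal image shift (in mm on the sensor) for a
# sideways camera move. Illustrative numbers only.
def image_shift(focal_mm, depth_m, truck_m):
    return focal_mm * truck_m / depth_m

# Subject framed at the same size; a wall sits a fixed 18 m behind it.
for focal, subject_z in [(28, 2.0), (300, 21.4)]:
    near = image_shift(focal, subject_z, truck_m=0.5)
    far = image_shift(focal, subject_z + 18.0, truck_m=0.5)
    print(f"{focal} mm lens: subject moves {near / far:.1f}x more than the wall")

# 28 mm lens: subject moves 10.0x more than the wall
# 300 mm lens: subject moves 1.8x more than the wall
```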
The other important thing to think about with this focal length is that it is an equivalent length: what it would look like on a 35 mil camera. So taking a look here, it doesn't necessarily have anything to do with the actual physical size of the lens. This is a 28 mil lens. This is also a 28 millimeter lens. This also has a 28 millimeter equivalent lens. So just looking at the size of the lens itself isn't necessarily a great way of judging what value should be put in focal length.
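For reference, the standard conversion behind that equivalence is just the crop factor. The sensor width and the example numbers here are my own illustration, not values from this shot.

```python
# 35 mm-equivalent focal length via the horizontal crop factor.
# A full-frame film back is 36 mm wide; sensor_width_mm is whatever your
# actual camera uses (the example assumes a ~23.6 mm-wide APS-C sensor).
def focal_35mm_equivalent(focal_mm, sensor_width_mm):
    return focal_mm * (36.0 / sensor_width_mm)

print(focal_35mm_equivalent(28.0, 23.6))  # ~42.7: a 28 mm lens "acts longer" on APS-C
```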
We can see here that the solved value for this particular camera was 13.3714 millimeters, which is quite a wide angle lens, which actually makes sense here. And that's why I would suggest that we keep this set to Unknown, especially in the first instance. Because even if we do know the actual lens that was used, we also need to do calculations based off of the film back of the camera, like the sensor size, and all of that other sort of fun stuff that you might not have to hand. Also, any sort of
optical flaws in a particular lens would also send
that value off a little bit. Like, this is rated at 28 millimeters, but maybe it's really 28.14 or 28.014 millimeters, or something like that. You know, the actual physical process of creating a lens isn't always perfect. Except for this lens, which is perfect. I love it. The other thing to think
about is if you're using a zoom lens and if you're zooming within that shot, that focal length
is going to be changing. If you do that, you need to make sure you click on Zoom Lens down there. So if the focal length
changes throughout the shot, have that selected. Just because you're using a zoom lens doesn't mean you
have to have that selected. It's only if you
are zooming within the shot. So with that being said, what can we do to
improve the error rate here? Well, one of the things
is coming into our features area and maybe making some
changes within the features. And we'll just stick
with auto features to begin with. We can also look at upping the minimum number of trackers per frame. At the moment, this is set to 12. So if I turn this up, it means we have a guarantee of more trackers within the frame, and we're probably going to be reducing the number of short trackers that we have to work with. So it's going to give us a better result, especially on longer shots.
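Here's a tiny illustration of why that guaranteed minimum matters on longer shots. This is my own toy example, not Mocha's logic: trackers die off over time, and any frame whose coverage dips below the minimum needs new features promoted.

```python
# Toy example (not Mocha's logic): find frames where the number of
# simultaneously active trackers falls below the guaranteed minimum.
def frames_below_minimum(active_per_frame, minimum=12):
    return [f for f, n in enumerate(active_per_frame) if n < minimum]

# Coverage sags in the middle of a long shot:
print(frames_below_minimum([18, 15, 11, 9, 14, 16]))  # [2, 3]
```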
The other thing we can do is crank up the maximum tracker count. In fact, we can crank this up to a ridiculous level. Now, the consequence of having this maximum tracker count set too high is that yes, maybe we will have more features at the end of it, but are they going to be the right features? We're just going to be giving ourselves a lot more feature points to manage over on the right-hand side. And it's also going to blow up the amount of time that it takes to calculate the solve. So the more features we have here, the longer your solving times are going to be. The default is 120. Actually, we'll crank this up really high, higher than I would ever normally set it. I'm going to put this to like 1400, click on Solve, and we'll
see what difference that makes. One of the biggest
consequences I can hear so far is that the
graphics card fan is spinning up. We're definitely
making it do a little bit of work. And once that's solved, we can take a look at our average error. That's now down to
0.98, looking pretty good. Let's take a quick look
in our perspective view here. We definitely get a different view of what's happening within our scene. We can see the curvature of the building. If I open up my feature points, our feature point
list is now rather extreme. At the end of it, we
have 1,339 valid vertices. Whew, that's a lot to worry about. Huge numbers of trackers don't always mean more accuracy. As part of an auto solve, the minimum number of trackers is probably more important. So when we're in that cleanup phase, only a few of those trackers are going to be automatically culled, probably leaving you, on average, with some worse trackers. Let's take this down to 320. That should be perfectly fine. The other thing to
worry about is the blip size here. So what is a good blip size? Well, this depends
on a number of factors. The first one is
probably the resolution of your shot. The blip size is measured in pixels. So how big are those features that you're going to be looking for in the shot? If we have a high-res shot with lots of detail in it, like we do here, this is an UltraHD shot with lots of nice, fine details. A small blip size of seven pixels is going to pick up a lot of detail, which is going to be nice for us. If all of this scene was out of focus, that blip size of seven is probably not going to pick up anything really interesting for us. So we'd have to bring
our blip size up substantially. So let's maybe bring it up to 20; that's quite high for the small blip size. And then our big blip size, that's the size of the larger features. As a rule of thumb, have the big blip size be twice the size of the small blip size, just to get a bit of variety between the two.
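If you wanted to encode that rule of thumb, it might look something like this. The sharpness test and the exact numbers are placeholders of mine; only the seven-pixel starting point and the two-times ratio come from the discussion above.

```python
# Rule-of-thumb sketch: small blips catch fine detail on sharp footage,
# while soft or defocused footage needs bigger blips; big blip ~= 2x small.
def suggest_blip_sizes(footage_is_sharp):
    small = 7 if footage_is_sharp else 20  # placeholder thresholds
    return small, small * 2

print(suggest_blip_sizes(True))   # (7, 14) for crisp UltraHD detail
print(suggest_blip_sizes(False))  # (20, 40) for soft footage
```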
So let's solve that again. I'm going to click the big Solve button. This should solve quite quickly now because we have a smaller maximum tracker count, but you can see my average error has shot back up again, over that one-pixel value. It's not too far
over the one pixel value, but it's not great. So let's take my
small blip size down again. Take that back down to eight. I'll take my big blip size down to 16. Let's run that solve one more time, and not change anything else. Remember, the average error before was 1.12. We now have an average error of 0.999. Goodness me, that's pretty good. The other thing you'll notice is that every time we run a new solve, we have to do a new alignment. It's going to try to auto
align to a ground plane again. So let's do those same things we did previously. Let's select our align-to-ground. We'll select our make-origin. Happy with that. Have a little look
through in perspective mode again. Now, navigating
perspective mode is fairly straightforward and it's probably a good
idea to have a three-button mouse, just like working with any 3D application. By using the scroll wheel, we can zoom in and out of our viewer. If we click and drag with the middle mouse button, we can pan around in the viewer. If we hold down Alt and left-click, we can rotate around in our viewer here, and we are rotating around the origin point. And if I have Alt or Option held down and use the right mouse button, I can dolly in and out of my scene. Very nice. But what if we want to go even further? Maybe we've got a
slightly more difficult shot. We want to add in some garbage masks. We want to use Mocha's planar tracker, maybe even the PowerMesh, to help drive some of this tracking data. But that we're going
to save for the next part in our getting started
with the camera solve module in Mocha Pro. My name is Ben Brownlee for Boris FX, and I'll see you
match moving some more shots in the next video. If you have any
questions about the new camera solver, please leave them in the comments below. If you want to go deeper as well, we do have a Boris FX Discord channel and Boris FX forum,
links in the description below. Of course, if you
don't have Mocha Pro already, check out a free trial at borisfx.com.