Hello Creative Shrimps! Gleb Alexandrov here. welcome
to yet another tutorial from the Photogrammetry Course. i'm pretty excited, because in this video
we'll explore the photogrammetry workflow which is 100% open source and cross-platform (!). before
we begin, that's the software that we're going to use, so feel free to download it: Darktable for
pre-processing the raw files, Meshroom as our main photogrammetry tool for this tutorial and Blender
obviously as a multi-purpose 3d editing tool for cleaning up the mesh and baking textures. a quick
disclaimer though: it might not be the most optimal photo scanning workflow, there are tools and
applications that are simply faster than Meshroom and, say, more convenient than Blender when it
comes to baking, but it's definitely the most accessible photogrammetry workflow at the moment.
so let's get started. here is a brief introduction to what the capturing process looked like. so
we went to this abandoned post-soviet hangar, found this amazing cellophane and pretty
much shot it from all possible angles while following the best practices of photogrammetry: mainly keeping the iso at its lowest number, setting the f-stop to 8 to make sure everything is in focus, and then using a tripod so we can set the shutter speed to whatever we like. when shooting on a tripod, the shutter speed can be any number we like, no shaky hands involved. but to stay on the safe side we set the timer to 2 seconds as well, to reduce the camera shake on pressing the button.
that is pretty much it for the photo session, we have covered these settings extensively in the
first chapter of the course. oh... and we also took one photo of our X-rite Color Checker gray card
to be able to calibrate the white balance in post. the whole scan took 107 photos on Canon 80d with
Sigma 17 to 55 millimeter lens. the source photos as well as the .blend files are available
for download, so if you want to follow along with us, consider downloading these photos. before
starting the photogrammetry process, let's actually process the raw sources in Darktable, a free and
open source software for developing the raw files. so right away i'm going to open the folder with
all the photos that we have taken during the scanning session and then i'm simply dragging
it into the central window of DarkTable. if you don't have the panel on the left, you can
hit the triangle to open it up and there we can check the metadata or the image information, there
is a lot of interesting data in there. what were the camera settings, for example f-stop was set
to 8 apparently, the exposure at half a second, the iso is just 100 which is awesome, and it never
hurts to double check if it's the raw file indeed, which it is, we can tell by the .cr2 format. on the right we should also have the panel open with the history stack and export settings. right now we are
in the light table mode in which we simply preview all of our images and we edit them in darkroom,
you can access it by double clicking on any image, let it be the image with the gray card that
will serve as a reference. down in the modules tab i will activate all modules just for a second to
showcase something. the thing we should watch for when developing the raw files in darktable
is whether the base curve has been disabled, occasionally you may discover that some kind
of preset has been applied, these curves are similar to the in-camera processing that is
applied automatically when you shoot in jpegs and that's something that we should avoid when
shooting in raw, so here goes the reset button. we should keep the contrast curve linear or
disable it altogether. all right, now i'm visiting the modules drop down menu and selecting
the scene referred workflow, that's the preset that contains all the modules that you will need in
this tutorial and next i'm going to check the tabs at the top of the shelf to make sure that
pretty much all the modules have been disabled and it seems like it is the case, except the filmic
rgb that should be also turned off, because it also implements a kind of a curve and we should
be suspicious about any kind of curves when developing our raw data for photogrammetry.
in other words, we are trying to stick to the sensor values as close as we can and avoid
the transformations that are meant for display. so to calibrate the colors and make them closer to
the reference in the source photo, we should adjust the white balance first, the temperature slider.
thankfully we have the gray card in our scene so we can set the custom white balance with
the help of such a card. if you don't have a card though, don't despair, try to find an area
in the photo that is supposed to be as close to neutral gray as possible and select it or
it's always possible to eyeball it a little bit. so here we have the scene illuminant that is
approximately 6000 Kelvin. what we can do next is adjust the exposure in a similar way, because we have the 18 percent gray card and we can use the color picker tool in Darktable to read the LAB values of these pixels right here... and this value, according to the specification of the X-Rite ColorChecker Passport, should read as 50 in LAB terms. so, we know for a fact that if we want to align these values with the values of the color checker, which is our reference, it should read 50. so what i'm doing here is carefully nudging the exposure
value to the right and then clicking on the color checker and checking again. increasing the exposure
a bit more... 44... close enough to 50. now i'm going to enable the tone equalizer module to try to reduce
the influence of the light sources on the scene. we can do it in two ways: we can of course adjust
the sliders like the 0 ev slider here or we can hover the mouse cursor over some area of the
image and use mouse wheel to adjust the sliders. something like that. it's not a physically
correct operation and it shouldn't replace a proper delighting but that's a decent starting
point for pre-processing photos for photogrammetry. i usually follow it up by switching the
highlight reconstruction to reconstruct color, that's an additional measure to preserve some
highlights that would otherwise get clipped. alright, heading back to the lighttable where we have
all our photos arranged, so we have adjusted this particular image, now down in the history stack i'm
pressing selective copy, making sure that the white balance is included, that's important and pressing
ok. now ctrl a to select all images at once, in the history stack selective paste and happily
confirm the operation. the adjustments made to the photo with the color checker should
now propagate to the rest of the sources. it looks good to me i think, now we should be ready
to export this batch of images out of darktable, so i'm heading over to the export tab, pressing the
browse icon, i've created two folders in advance: one for geometry and the other one for texture
and they are called like that: meshroom_geometry meshroom_texture. i'll explain why in just
a moment. so let's enter the texture folder, set it as the output destination and now
we should check two things, the first one is the output resolution and i'm limiting it to 2500
pixels wide. why do we do it though? why not simply use the full resolution? firstly to reduce the
processing times, secondly to reduce the file size and finally and probably most importantly we found
out that pre-scaling the footage before feeding it into Meshroom leads to better results than using
the higher downscale factors within Meshroom. and then we should check the file format, that
should be 8-bit tiff, and then hit export. after this batch of images meant for texturing has been exported, let's do another batch for geometry. disclaimer: this step is not absolutely necessary, Meshroom will work just fine with just the one set that we have prepared already, that being said it's very useful to know how to prepare the split input for geometry and textures in
Meshroom so that's what we're gonna do. double click on any of the sources to enter the darkroom
mode. this time i will prepare more contrasted versions of the images, so the tone equalizer is set to off, the local contrast can be turned on instead, i will try to boost the overall sharpness and the amount of fine detail in the images, in theory that should help Meshroom reconstruct a better mesh, so
let us also enable the sharpen effect, crank up the amount all the way to its maximal value, not
that the whole process wouldn't work without all these sharpening shenanigans, but anyway it's
useful to explore this branching workflow of feeding two types of sources to Meshroom. so once
again clicking the lighttable to get back to our lighttable tab with all the previews, selectively
copying all the settings including white balance, ctrl a to expand the selection to all photos
and then selective paste all modules onto them. looks like the processing has been finished so
we just have to export it out to its separate folder. clicking on the browse icon, back to the parent folder, and now let's dump it into the meshroom_geometry folder. it is important that the resolution or the size of the images lines up with the previous folder, so 2500, and pressing the export button.
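as a side note, darktable also ships a command-line tool, darktable-cli, which applies the edits stored in the .xmp sidecars, so a big batch like this can be re-exported without opening the ui. here's a minimal Python sketch of that idea; the folder names are made up, and anything beyond the --width/--height flags (bit depth, tiff settings and so on) may differ between darktable versions, so double-check against your install:

```python
# sketch: re-export the edited raw files with darktable-cli (assumes darktable-cli is on PATH
# and that the .xmp sidecars with our Darktable edits sit next to the raw files)
import subprocess
from pathlib import Path

RAW_DIR = Path("scan_raw")            # hypothetical folder with the .CR2 + .xmp files
OUT_DIR = Path("meshroom_texture")    # target folder for the 2500 px wide tiffs

OUT_DIR.mkdir(exist_ok=True)
for raw in sorted(RAW_DIR.glob("*.CR2")):
    out = OUT_DIR / (raw.stem + ".tif")   # the .tif extension picks the tiff exporter
    subprocess.run([
        "darktable-cli", str(raw), str(out),
        "--width", "2500", "--height", "2500",   # cap the size, like in the export module
    ], check=True)
```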
after pre-processing the raw files in Darktable we should have two folders: the one with the images calibrated for texturing based on the reference gray card, don't worry if it looks a little bit too flat and boring, it should look like this, it is meant for albedo textures. and here's the second one with the files aimed at reconstructing geometry, slightly sharpened with boosted contrast and so on. right, so we did all the pre-processing required for photogrammetry in Darktable and now we can actually move on to Meshroom
and start the photogrammetry process going. now on to reconstructing a fabulous 3d model
out of the processed photos in Meshroom! before we get started feel free to download
this free and open source photogrammetry software from alicevision.org. so the first
thing we need to do is import the photos that we processed in Darktable into Meshroom. in order
to do that i'm going to select the photos from the geometry folder and drag them onto the images
window. so once we load up our source photos into Meshroom we should see it over there. the scale
of the thumbnails can be changed if we wish. so these are the photos that we have processed
for geometry, we can click this icon to check their metadata and see if everything is all right. for
example it's worth checking the resolution to see if those files are the files that we think they
are... looks okay to me... and now we can press this enticing green button that says start!!.. actually no ;)
we should really use nodes one by one in the graph editor, it's much safer to approach Meshroom in a
modular way and activate these nodes one by one until we reach the final destination which is
the texturing node. that's going to be the last one. but we should be definitely proceeding
through this chain on a node by node basis. so i'm going to right click on the first node and
compute it. Meshroom prompts an error that we should probably save the file first, so let's heed
the advice. file, save as, name the file somehow, i won't be insisting on particular naming :) okay now
we can right click and compute it again and we can see that the progress bar became green,
it means that the node has been calculated. every node in Meshroom has the attributes that
can be changed, and some of them can be seen only if the advanced attributes checkbox is ticked. this checkbox does make everything look a bit convoluted, and the default settings are more than enough in practically all cases, but there are some useful settings that can be seen only with this checkbox set on, for example the force cpu extraction checkbox, which can be switched off for a slight benefit in speed. generally though, it isn't that important to see all the advanced attributes. well okay, returning for a moment to describer density and describer quality: these two settings definitely have the most influence over the processing time and the resulting quality of the feature extraction node, i will go with the flow, or rather with their
default values, but feel free to experiment. the right click menu, compute, the progress bar on
the node has turned orange meaning it's processing... once it's finished it will turn green and that's
how we know that it's fully processed. now for example we know that every feature has been
processed. we can click on the display features icon in the image viewer to see those markers
or sift features. basically Meshroom detected a bunch of points for every photo and in the feature
matching node these points will be matched to then create the structure from motion. but first let's
compute the image matching node then proceed to feature matching, no need to touch the settings the
default values are just fine for these nodes, probably all the way until the depth map node
based on our experience, so right click compute. to track the progress we can head over to the log
window and we will see the chunks there, each chunk has to be processed until the node is calculated
and marked green. and here we can see the progress bar for each chunk as well. the chunks just like
the nodes are also marked green on finishing. so it took approximately seven minutes to calculate the
feature matching node... i'll be logging the progress, you know, just in case... then onto the structure from
motion node. as you can see the calculations become progressively longer and longer and this node
definitely will take a considerable chunk of time. 20 minutes to be exact. wow. it's quite typical for
photogrammetry and for Meshroom photogrammetry too of course. i feel there might be an opportunity
to use a wise quote here: "To lose patience is to lose the battle" by Gandhi or something like
that anyway. but what do we have here? it's our structure from motion that has just finished
processing! all right, we can see the generated result in the right hand panel, the visibility can
be toggled on and off by pressing the eye icon, we can read the number of points there and
the number of cameras that have been matched. 107. all cameras have been matched and that's
great. then in the top right corner of the ui we can change the scale of the points and the
scale of the cameras as well. i try to keep these values pretty low so the viewport doesn't look as
cluttered. all right by looking at the viewport we can tell that the orientation of the model is a
bit weird, what we can do is add the extra node after structure from motion. in order to do that
we need to create some room, so i'm gonna move it like this to the right node by node until we free
up some space over there so we can plug the node, right click on the empty space, go utils, sfm
transform. the sfm data should go into the input and then right click and remove this existing
connection and hook up the output sfm data file to sfm data input of the prepare dense scene node.
we can choose from different transformation methods, but let's try auto from landmarks,
hopefully that will fix the orientation issue. let's compute the node, double click it to open
it up, let's toggle off the visibility of the original structure from motion... and i think
something is wrong. let's try unticking the scale and translation checkboxes and just play
with rotation instead... okay, double click again... all right, awesome, that did the trick! i think
it improved the orientation of the point cloud, it's far from perfect though, i think it can be
adjusted further. say, if we'd like to tweak the orientation of the point cloud manually, we can
do it too with the help of the sfm transform node. so i'm gonna repeat a somewhat tedious
process of moving it node by node to the right and then i'll hook up the second sfm transform
node in between these two. something like that. the transformation method now should be set
to manual, that gives us a gizmo with scaling rotation and translation controls, so we can
place the mesh, or rather the point cloud, manually somewhere in the scene. let's toggle
off the visibility of the original object and use the gizmo to carefully rotate and translate
the mesh until it rests on the grid like this. there are two gizmos now: one for navigating
the viewport and the second one for rotating the mesh. you can click on this hide
gizmo icon to hide it temporarily. optionally this alignment step can be
skipped altogether, because we will be tweaking the orientation of the resulting
model in Blender anyway, but i'm too pedantic to skip it. i really want to see the correctly
oriented mesh right here right now in Meshroom. alright after doing the quick alignment we can
proceed to calculating the rest of the chain. preparing the dense scene for
depth maps won't take long, just a few seconds at max. but then
it's time for the depth map calculation, that beast can *definitely* take an insane
amount of time especially if the downscale is left at one meaning the original resolution,
in other words, if the downscale is set to 1, Meshroom divides the source photo resolution by 1, meaning it uses the originals. if the downscale was set to 2, Meshroom would divide the resolution of the source photos by 2 before processing the depth maps, and if the factor becomes 4, Meshroom downscales it even further. let us go with the downscale of 2 though.
taking into consideration the already halved resolution in Darktable though. so by taking that
into account it's like 4x optimization already. in addition to that there is a
whole bunch of advanced parameters, which i really don't recommend touching, the
developers of Meshroom don't recommend tweaking them either, so let's listen to them, it's simply
not worth it. alright calculating the depth map now there should be enough time for a cup of
coffee or green tea if you're up for it, in order to predict how long it will
take we can once again check the log. each chunk that has been processed shows the time
that it took, so to approximate the remaining time in a kind of a rough way, we can multiply the time
as seen in the finished chunks by the total number of chunks. for example we can tell that the depth map chunk number 12 took approximately 19 seconds to complete, and let's say we have 35 chunks in total; 19 multiplied by 35... what is it?.. 665 seconds, or roughly 11 minutes. it is a very rough ballpark though, because chunks can take different amounts of time to calculate, but that gives us some figure to work with.
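just to spell that back-of-the-envelope math out, here is the same estimate as a tiny Python helper; the 19 seconds and 35 chunks are simply the numbers read from this particular log:

```python
# very rough ETA for a Meshroom node: average time of the finished chunks times the chunk count
def estimate_remaining(finished_chunk_times_s, total_chunks):
    avg = sum(finished_chunk_times_s) / len(finished_chunk_times_s)
    return avg * total_chunks  # seconds; chunks are not equally heavy, so it's only a ballpark

print(estimate_remaining([19], 35) / 60)  # ~11 minutes for our depth map node
```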
allow me to fast-track the result though. anyway, it took about 11 minutes indeed! i think it's actually pretty fast because we have downscaled the sources in Darktable, remember? that is so totally worth it, downscaling. if you feel like waiting a little bit longer and potentially getting a more detailed result in the end, then feel free to bump the downscale value all the way to 1. it is a little bit scary though. anyway i'll
proceed to calculating the depth map filter node which is really quick to calculate as opposed
to the depth map node. alright, so with the depth maps calculated and filtered, it is high time for
materializing the 3d model itself. the colorize output can be turned on if you want to check the
vertex colors, but i don't see a real need for it right now, then we should probably consider
switching on the custom bounding box within the meshing settings, that would really help to cut
the garbage and optimize the reconstruction speed. right away we see no change after ticking
this checkbox, that's because we need to double click on the meshing node, that's important. and
then we should see this new model in the scene tab and we can hide the bounding box if needed
by clicking on this icon. all right so we can adjust the reconstruction region by manipulating
this gizmo, but it's somewhat obstructed by this second display gizmo. thankfully we can disable
it by clicking on the display trackball icon. and now we can move rotate and scale this thing
so the bounding box encapsulates our mesh in the most optimal way. like, do we really need to reconstruct everything, or just this clothy object? maybe we can save some resources by constraining it like that. sometimes it's a life-saving optimization if, for example, you have scanned an entire city area and you want to reconstruct just one building. okay, with the bounding box ready we can
go ahead with the mesh reconstruction. that is pretty exciting. we will see the fully
fledged photo scan, the fully fledged 3d object for the first time and we will tell if it's good
or not (of course i already know that but anyway ;) i will disable the bounding box, re-enable the
trackball in the viewport and what can i tell? except these floaters at the bottom of the model
everything else looks great to me! actually better than i expected. this is our photoscan and it is
actually good, i'm really happy with it. now we can either proceed to the very end of the chain and do
texturing or we can do the cleanup in Blender and then do texturing instead and that's probably
what i'm gonna do, because i want to change a few things with the mesh. so after creating the
publish node i'm connecting the mesh output into the input files and then the output folder needs
to be selected. we can do it like this, at least in Windows. after opening the folder right click
on its name, copy address as text and then paste the text over there in the output folder field.
and lastly right click on the node and compute. after that an obj file that contains our
high poly mesh should appear in that folder. in just a moment we will bring it into Blender for
some cleanup. in this part of the video we're gonna mesh doctor our 3d model in Blender and then bring
it back to Meshroom for some texturing. so fire up Blender, go file, import, choose obj, navigate
to the folder with the model and hit import. so that is our mesh, but before doing anything
with it i'm going to right click in the bottom of the user interface and enable scene statistics,
so we can see that it's approximately 1 million vertices and 2 million faces. and... woo... it seems
that the z-axis was flipped. we thought that we had sorted out the orientation, but... not so fast. it
isn't recommended to change the mesh orientation itself, because it has to be re-exported back to
Meshroom, so i created an empty object instead. i'm going to parent our mesh to this empty object: select both objects with the empty being the last one, right click, parent, object. now, on rotating the empty object, the mesh will follow all the changes in transform. and in addition to that
our empty will record the changes that we do, so we can roll it back just before exporting it
out, so then it all aligns perfectly with the Meshroom coordinates. okay next i will add the cylinder
that will serve as the boolean cutout object, let's crank up the vertices count to make... uh...
the cutout a little bit smoother. we can click the x-ray icon to see through things, and press S then X to scale the cylinder horizontally on the x-axis, and then i'm going to select the
photoscanned asset, come over to the modifiers tab and choose the boolean modifier. right away i will
switch it over to the fast mode, because, well... it's fast! and we are dealing with considerable
number of polygons and the mode should be switched to intersect instead of difference, then
i'm going to pick the cylinder as the operand and that constrains the asset to the shape of the
cylinder. now we just have to apply the modifier, and lastly we can select the boolean object, press x and get rid of it, and toggle off the x-ray mode.
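for the scripting-minded, the same empty-parenting and boolean trim can be done with a few lines of bpy. this is just a sketch of the steps we did by hand: the file path and object names are made up, the rotation is whatever fixes your flipped axis, and i'm using the legacy obj importer from Blender 2.8x/3.x:

```python
# sketch: import the Meshroom obj, parent it to an empty, trim it with a boolean cylinder
import bpy

bpy.ops.import_scene.obj(filepath="//meshroom_export/mesh.obj")  # hypothetical path
scan = bpy.context.selected_objects[0]

# the empty records the orientation fix, so we can zero it out before re-exporting
empty = bpy.data.objects.new("scan_root", None)
bpy.context.collection.objects.link(empty)
scan.parent = empty
empty.rotation_euler.x = 1.5708  # ~90 degrees in radians, whatever un-flips the z-axis

# cylinder that acts as the boolean cutout object
bpy.ops.mesh.primitive_cylinder_add(vertices=64, radius=1.0, depth=2.0)
cutter = bpy.context.active_object

mod = scan.modifiers.new("Trim", 'BOOLEAN')
mod.operation = 'INTERSECT'   # keep only what's inside the cylinder
mod.solver = 'FAST'
mod.object = cutter

bpy.context.view_layer.objects.active = scan
bpy.ops.object.modifier_apply(modifier=mod.name)
bpy.data.objects.remove(cutter, do_unlink=True)
```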
alright, so technically what we did is just import this photoscan into Blender and modify it. we could have edited the mesh itself, or applied any kind of modifier, or adjusted it via sculpting for example, it doesn't matter much at this stage. what we did is make some change to the mesh, but because we have the empty object around we can revert the transform of the object and get it back to Meshroom for some texture projection, for example. alright, so far so good. should we show some sculpting techniques though
section a bit more to show how the mesh can be cleaned up via sculpting, just in principle, you
know. by the way now i will save this .blend file to load it up once we mess it with sculpting.
just keep in mind that this sculpting section is not supposed to be followed along or you can
obviously do it if you wish, but be sure to save the file, so we can get back to it. often if the
photo scan is good enough cleaning it up means denoising it. the high frequency perturbations
in the geometry are the scourge of photo scans, and the Blender sculpting mode comes to the rescue.
in the left tool shelf we can find all the brushes and if the mesh is dense enough we don't need
dynamic topology and i think 1.7 million polygons is already dense enough. and now what brushes
could be useful for denoising the geometry? the flatten brush comes to mind instantly. it's
my favorite brush for this kind of task. it works in kind of a simple way, it really does flatten the geometry, it can't be put better than that. the strength can be controlled at the top of the user interface, and that's especially useful if you do it with a mouse. if you use a tablet, the
strength can be controlled by the pen pressure, and needless to say that makes the whole process
much more intuitive and just generally faster. and then you just gently massage the mesh along
its curves, and the flatten brush is intelligent enough to not wipe out all the fine detail in those places. that is my number one brush when it comes to manual mesh denoising, it's really
really good. then we have the smooth brush that can alternatively be invoked with the help of
the shift hotkey with any kind of brush. it is even more nuclear than the flatten brush so you
have to control the strength if you don't want to transform everything into marshmallow. but in small
doses that's also a nice medicine for the noise. the third brush which we can place roughly
in this denoising section is the scrape brush, actually the settings for all the
brushes can be found in this menu, say, if you want to adjust the radius the
falloff, the strength, the direction, the texture, whatever else, it can be done from this menu. you can explore it if you wish. so even though the scrape brush works kind of similarly to the flatten brush in terms of polishing the surface, it has
the very strong hard surface connotations. imagine having to clean the photoscan
of some mechanically engineered surface, something along these lines. it's a fairly
unique brush and it fulfills this unique role. it's very easy to overdo it though, so i
would be careful with using it. again maybe in cleaning up the mechanical surfaces it has its
place, but it's a little bit, you know, freestyle. so the flatten brush is super useful
for reducing the noise, the smooth brush, then probably the scrape brush from time
to time. if the photo scanned mesh has some really cursed spot that has to be filled with
concrete, then the clay brush may come in handy. it literally builds up the virtual material
in crevices, so we can fill the entire caverns of chaos with this clay and sometimes it's
the last resort. it's the last thing that we can do to clean up the photoscanned mesh. now loading up our
backup as promised... uh... what i recommend to do with such meshes is just give it a slight treatment of
the flatten brush, by the way the tool shelf with the brushes can be toggled with the help of the
t shortcut. so what i'll do here is apply a small and fairly insignificant layer of flattening with
a flatten brush obviously, i will fast track this to not bore you to death. at this point i want
to encourage you to try the sculpting brushes, the flattening brush specifically to see what you
can squeeze out of this mesh in terms of quality. and once you're done playing with it, we can export this cleaned up and sculpted mesh back to Meshroom for some texture reprojection.
all right so let's imagine that we have done everything that we wanted to do with this mesh,
we trimmed it, we sculpted it ever so slightly. what we need to do now is open the
right tool shelf with the transform tab and reset the rotation of the empty. as we remember
all the changes in orientation have been recorded in there, so let's input 0 in the x rotation field
and that will bring it back to its original value. if we did everything right, after exporting
this mesh out back to Meshroom it should align perfectly with the previous parameters.
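and again, if you prefer scripting, the whole "zero the empty, export the selection, restore the rotation" dance is just a couple of lines of bpy. the names and the path are assumptions, and newer Blender versions use bpy.ops.wm.obj_export instead of this legacy exporter:

```python
# sketch: reset the empty's rotation, export the selected mesh as obj, then restore the rotation
import bpy

empty = bpy.data.objects["scan_root"]         # hypothetical name of our helper empty
scan = bpy.data.objects["highpoly_tweaked"]   # hypothetical name of the cleaned-up mesh

saved_rotation = empty.rotation_euler.copy()
empty.rotation_euler = (0.0, 0.0, 0.0)        # back to Meshroom's original orientation

bpy.ops.object.select_all(action='DESELECT')
scan.select_set(True)
bpy.context.view_layer.objects.active = scan
bpy.ops.export_scene.obj(filepath="//highpoly_tweaked/highpoly_tweaked.obj",
                         use_selection=True)  # legacy exporter (Blender 2.8x-3.x)

empty.rotation_euler = saved_rotation         # bring the rotated version back
```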
so i'm using the obj export once more, creating a separate folder for it called high poly
tweaked, limit the obj exports to selection only, the other properties should be left at
their default values within the geometry tab and i think we should be good to go, maybe after
renaming it. highpoly_tweaked. after successfully exporting it out we can press ctrl z a few times
to undo a few steps and bring back the rotated version of the empty. chances are we may want to
return to this file later on and fix something. after our mesh has been cleaned up in Blender,
it's time to return back home into Meshroom and do some more texturing. we don't need the
publish node anymore so i'm going to hit delete. now we have mesh filtering
and texturing nodes to do. let's push them over to the right
like this to free up some space here, then i'm going to right click, go mesh processing
and add the mesh decimate node. this node will be our workaround to actually load the obj mesh
from Blender, otherwise we'll simply get an error that i'll show in a moment. so i'm removing
the connection that was previously going into the mesh input of mesh filtering and reconnecting
the output of mesh decimate instead. next we're going to define the input of our freshly created
obj file here so i'm going to drag it over there, let's also set the simplification factor
to something insignificant like 0.99, just for the purpose of reworking the vertices
of the mesh basically. right click, compute, as i've said we need to do it like this because for
some reason mesh filtering node doesn't accept our mesh, i don't know why really, it could
be a bug. actually, let's try it out: i'm gonna paste the path to the obj file into the mesh
filtering mesh input and try to calculate it. and boom! the progress bar turned terrifying red, so
that's why we needed to pre-process the mesh via the mesh decimate node first, so i'm setting the
smoothing iteration to 1 in mesh filtering for some extra polish and now it should be perfectly
fine. i think it could be a bug of how Blender exports the obj files or at the same time it could
be a bug with how Meshroom interprets them. but anyway now we know the workaround. double clicking
on the mesh filtering node opens it up in the right panel, i think it is worth hiding the rest
of the meshes to get a clear view on our new one, it's ready for texturing i think. before feeding in
the second set of inputs specifically designed for texturing let's try to process this node as is to
show something, namely how the texture will look with the texture file type set to .tiff, and generally how the overly sharpened files with cranked-up contrast will look. hint: not
great, not terrible, but you will definitely see the color management issue related to tiffs.
so double click on texturing to preview in the viewport, and if for some reason you don't
see textures, it can be switched on in this menu. so as you can see the diffuse texture turned
out to be significantly visually darker than our input photos, in addition to weird contrast
and sharpening that we did for better processing of geometry. that is related to how Meshroom
does color management: specifically, it uses the .exr format under the hood for pre-processing the sources, and now, after seeing .tiff set as the texture file type within the texturing node, it basically converted everything to the srgb color space. but what would be the safer workaround to avoid any
confusion with color profiles and stuff like that? we're going to sort it out in a moment for
now let's right click on the texturing and duplicate this node. let's sort out two issues
at once: first of all let's connect the inputs designed for texturing into this node and secondly
let's untangle the color management puzzle. in order to branch texturing like this we would
want to have two copies of the prepare dense scene node. so what we can do is duplicate this node and have
a look at its settings, it should have the images folders drop down menu where we can hit the plus
icon and define the path to the folder with the sources meant for texturing. as a quick reminder
we had two folders: one for geometry, the second one for textures. the one for generating diffuse and
later on albedo texture was slightly flat looking and as if delighted already. this one. let me copy
the address as text, alt tab into Meshroom and paste it over here. next it would be brilliant if
we connected the images folder output of this node all the way to the corresponding input of
our second texturing node if that makes sense. to cut the connection i'm going to
right click on it and choose remove. i hope the user interface is
big enough in order to see it. anyway now we need to connect the images folder
input and output like this. it's important to not mess up the connection somehow, and it is hellishly easy to do that actually. we need to be extremely careful with nodes. as for
the settings, let's keep the texture side at 4k resolution, we can go slightly higher if we wish to
but i think 4k will be quite enough, the downscale at 1 meaning original resolution and the texture
file type to .exr instead of .tiff. lastly the unwrap method should be set to lscm otherwise Meshroom
will generate a whole bunch of textures, like 10 textures instead of one and i think we would like
to see the texture nicely packed into one file. it took approximately 10 minutes to generate the
texture which is not that bad, let's double click it to load up the preview in the viewport,
let's hide the stuff that is blocking the view and here it is, an amazing high poly mesh
with uv map and diffuse texture, ready to go! it took us approximately 47 minutes to fully
calculate this photoscan with texture in Meshroom and actually i'm really satisfied with the quality.
maybe not so much with the processing time though; i wonder what you guys think? in my opinion it's pretty good and usable as an asset, especially considering that it's a free and open source
pipeline. that is by itself mind-blowing i think. as a finishing touch, let's take the model plus
texture out of Meshroom via the publish node. the mesh and the texture as well should go into the
input files, as usual the output folder has to be defined. let me create a separate one called high
poly textured and just drag it into that window. right click, go compute and there it should be
dumped into that folder that we have just defined. and indeed it is there, perfect! so that is
it for the first major part of the tutorial where we managed to generate a
pretty good photoscanned mesh using 100% free, open source and cross platform pipeline!
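a quick aside before part two: if you'd rather re-run this whole chain without clicking through the nodes, Meshroom also ships a headless runner, meshroom_batch (called meshroom_photogrammetry in older releases). i didn't use it in this video, so treat the flags in this little Python sketch as assumptions and check them against your Meshroom version:

```python
# sketch: run the default Meshroom pipeline headless on our pre-processed geometry folder
import subprocess

subprocess.run([
    "meshroom_batch",                  # assumes the Meshroom bin folder is on PATH
    "--input", "meshroom_geometry",    # the Darktable exports prepared for geometry
    "--output", "meshroom_output",     # where the textured OBJ should end up
], check=True)
```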
in the second part though we'll take this model and make it game ready by baking textures
onto the low poly version of this model. hello and welcome to the second part of
the free photogrammetry pipeline tutorial in which we explore the 100% open source, accessible
and cross-platform pipeline for creating amazing photorealistic 3d from photos. in the previous
video we used Darktable to develop our raw photos and Meshroom to generate 3d geometry, we also
utilized Blender as a general cleanup tool and in this video we'll use Blender as a
baking tool to reproject diffuse, normal and even displacement textures onto the low
poly version of the mesh to make it game ready. as a refresher, that's how our scan made in Meshroom looks imported in Blender. you can find this file in the project files, feel free to download it. currently this mesh is fairly heavy, it has 1.7 million triangles, and as many faces for that matter, because it was triangulated. once again, both the .blend files and the obj file used for this tutorial can be downloaded; we'll require these files in just a moment to be
able to follow along so it's recommended to get it. so i think we're ready to start
our Blender baking adventure. first we'll need to go over to our
import menu to get the obj file in, click import, it could take a few seconds if it's
heavy enough, but here we go, i'm going to rotate the model so it sits on top of the grid and to
aid the rotation i usually set the origin to geometry from the right click menu and then
it's the usual business of snapping the object into place by utilizing g, r and s hotkeys for
grabbing, rotating and scaling it and it's easier done in the orthographic projections, that can be
brought up by the help of the numpad keys. next we should probably smooth out the normals of the
object, so right click, shade smooth. actually it's fairly important to keep the normals averaged or
smoothed out before any kind of texture projection. to organize the scene a little bit and prepare it
for baking, i'm going to create a new collection and call it hp or high poly and while we are
at it, let's also rename the object accordingly. so high poly is the name. as you remember, we have
had the diffuse texture somewhere, let's hook it up to the principled bsdf shader. so here is the shader
editor and let's drag our .exr texture into it and plug it into the base
color socket of the material. by the way the color space is still set to linear
or it could be non-color, that's the same thing. now to be able to see this texture we need to
jump into the rendered mode in the viewport, maybe i'll also increase the strength of the environment
light. alright, so the highpoly version is approximately 1,700,000 triangles or faces, our next
mission objective is to derive the low poly version by simplifying this monster,
that's our plan. let's duplicate the mesh and crunch down the density of polygons
considerably. i'm renaming this object to low poly and putting it into its own
collection called lp. probably the easiest way to turn the high poly version into its low poly
counterpart is to utilize the decimate modifier. if you have a look at the modifier settings you
will see the face count displayed at the bottom; right now it reads about one million seven hundred thousand polys. if we enable the wireframe we'll probably have a better taste, or rather a
better look at how we crunch down the polycount, so the ratio of 0.1 reduces the polycount by the
factor of 10 obviously and what i usually like to do is apply the modifier and then add the second
iteration of it, so modifiers, decimate. by doing it like that we make sure that it behaves in a more
responsive and fluent way generally and it's just easier to fine tune i guess. so reducing the ratio
down to 0.05 or even 0.03 produces 5k faces, that should be totally fine even for real-time
applications i think, removing the material just in case and before giving it the green light it's
useful to preview it in the solid shading mode, i see some mesh artifacts that bother me, so i'm
going to increase the ratio ever so slightly and apply the second iteration of the decimate
modifier. so this is going to be our optimized version of the model, with a face count of approximately 7,000 triangles. that should be an okay polycount for a modern game engine, kind of an average one for this type of asset i think.
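if you ever need to batch this for several scans, the two decimate passes are easy to drive from Python as well; a rough sketch with the object name and ratios we used here:

```python
# sketch: two decimate passes, 0.1 and then 0.05, each applied before the next one is added
import bpy

lowpoly = bpy.data.objects["low poly"]   # assumed name of the duplicated mesh
lowpoly.select_set(True)
bpy.context.view_layer.objects.active = lowpoly

for ratio in (0.1, 0.05):
    mod = lowpoly.modifiers.new("Decimate", 'DECIMATE')
    mod.ratio = ratio
    bpy.ops.object.modifier_apply(modifier=mod.name)

print(len(lowpoly.data.polygons))  # sanity check: should be down to a few thousand faces
```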
so now it boils down to making it look as cool and as high poly, i guess, as possible. you guessed it: via baking. there's just one more thing left to do before actually baking the textures, and it is creating the uv map, so in the second window i'm
opening up the uv editor, selecting all the faces in the edit mode with the help of the a shortcut,
then pressing u and selecting smart uv project as the fastest unwrap option, almost an automatic
one. the defaults should be fine... let's confirm. after seeing this huge island that represents the ground, i think we can select it by pressing L, then S to scale it down a bit. and then select everything by pressing A, go uv, and let's pack the islands. now the uv space is used in a slightly more efficient way that prioritizes the object and not the ground. even though it's not perfect, it's much better this way.
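the same unwrap in bpy, in case you want it in a script; note that recent Blender builds expect the angle limit in radians, so treat the exact arguments as assumptions and tweak them to taste:

```python
# sketch: smart uv project the low poly mesh, then pack the islands
import bpy
from math import radians

lowpoly = bpy.data.objects["low poly"]
lowpoly.select_set(True)
bpy.context.view_layer.objects.active = lowpoly

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=radians(66), island_margin=0.001)
bpy.ops.uv.pack_islands(margin=0.001)
bpy.ops.object.mode_set(mode='OBJECT')
```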
now that our low poly model has been successfully unwrapped, we can figuratively put it in the oven... for baking... in the oven... anyway. all right, to keep the workflow free and open source we're going to be using Blender for baking textures in this part of the tutorial. it has a few minor quirks but it's totally doable. so first of all i'm going
to toggle off the visibility of our highpoly mesh in the outliner and enable the collection with
the lowpoly mesh instead. as a quick reminder it has been simplified to approximately 7000 faces
and the uv map has been already generated as well. so our cunning plan for baking the displacement
map from multires in Blender looks like this: first we need to capture the details from the
high poly mesh by adding the multires modifier and the shrink wrap modifier, that's our magic
combo ;) we'll subdivide our multi-res a few times to give it enough resolution, then the shrink wrap
modifier will capture the geometric details from our source high poly object and then eventually
these captured details will get pushed into the multires again so we can bake them as a height map. don't worry if it doesn't make sense just yet, we'll get into the weeds in just a moment and show the whole process. for now, let's not forget to select the high poly mesh as the target for the shrink wrap.
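in bpy terms the "magic combo" is just two modifiers on the low poly object, so here's a hedged sketch of the setup; the object names and the number of subdivisions are assumptions, and the order (multires first, then shrinkwrap) matters:

```python
# sketch: multires + shrinkwrap combo that captures the high poly detail on the low poly mesh
import bpy

lowpoly = bpy.data.objects["low poly"]
highpoly = bpy.data.objects["high poly"]
bpy.context.view_layer.objects.active = lowpoly

multires = lowpoly.modifiers.new("Multires", 'MULTIRES')
shrink = lowpoly.modifiers.new("Shrinkwrap", 'SHRINKWRAP')
shrink.target = highpoly
shrink.wrap_method = 'NEAREST_SURFACEPOINT'  # 'TARGET_PROJECT' caused pinching for us

# subdivide the multires a few times so there's enough geometry to store the detail
for _ in range(4):
    bpy.ops.object.multires_subdivide(modifier=multires.name)

# once it looks right, apply the shrinkwrap so the captured shape gets pushed into the multires
bpy.ops.object.modifier_apply(modifier=shrink.name)
```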
to capture the full range of surface intricacies, we should probably bump the resolution within the multires modifier, so i'm going to click subdivide, and that will increase the resolution of the geometry. you can already see by its changing shape that it inherited the detail from the high poly mesh, that's the shrinkwrap doing its magic. so let's
subdivide it once more for even better effect and it's looking quite good already! let's check
how many polygons did the original model have... 1.7 million, so that is our target, that is the
reference number of polygons we should drive our low poly multires geometry to, or rather the number of triangles, because the low poly model hasn't been triangulated yet. so currently it measures
approximately 81 000 faces and 163 000 triangles conversely, so ideally the number of triangles
after we subdivide it a couple more times should correspond to or be higher than the number of
polygons in the reference model. so let's click subdivide once again within the multires modifier
and let's see what the polycount will become. alright we got approximately 300 000 faces, not
nearly enough, so let's try our luck at subdividing it once again. we should probably keep in mind
that each next level of subdivision will make it progressively harder for computers to calculate
but anyway if we take a look at the statistics we'll notice that currently our multi-arrest merge
is 2.6 million triangles so technically it's even more than 1.7 million triangles of the original
high poly mesh so that is more than enough geometry to capture all the detail. what about
the quality of the shrink wrap geometry though? we can definitely tell that even though the form has been captured pretty well, there are noticeable artifacts. i believe that some of these pinching
issues can be tweaked by changing the wrap method to target normal project within the shrink
wrap modifier. but still it needs some more work. and i'm not entirely sure that a target
normal project was an improvement. you know, that happens in computer graphics
all the time, we think that we improve stuff while we actually make it worse, that's classic :) we can tell by these overly thin polygons
that something is way off with the mesh so let's probably roll it back indeed, the
nearest surface point then is our choice. now the pinching is gone. i think our
faithful shrink-wrap modifier did its best so we can apply it. so all the changes caused
by the shrink-wrap modifier get pushed into the multi-res, it's very useful because
that's what we are going to be baking, namely the difference between the final amount
of subdivision within sculpt and render which is set to four and the viewport subdivisions that
will eventually be set to zero that will be our difference between the low and high resolution
that will end up being rendered as the height map. so let me switch the viewport levels back
to 4. it's almost ready for baking, i think... but first we probably have to do something
with pinching. it's no good to leave it like this. i think sculpting comes to the
rescue once more. so ctrl tab, sculpt mode, and T opens the left tool shelf with all the brushes. now we need some kind of a smoothing solution in
addition to the smooth brush that can be invoked by pressing shift, and the flatten brush, we have a very intriguing one called mesh filter: we can click on inflate and change the mode to relax instead. and now just click and drag and watch the whole
thing get progressively more and more relaxed until it is completely chilled out and there is
no visible pinching i guess. awesome, simply awesome! i think we have prepared the multi-res data
for baking now it's time to actually bake it. finally i'm going back over to the object
mode and i'm preparing mentally for the baking process, which is going to be a little bit
hardcore. probably the biggest weakness of Blender compared to other baking tools is its inability
to bake height maps from one model to another, fortunately we have this multi-res displacement
baking workaround where we bake the detail from the higher sculpt levels to the lower ones, but still
it's not quite there, it's still not as intuitive as some other baking applications. that being
said by prioritizing Blender as our baking tool we keep this workflow a) open source and free and
b) cross platform, that's a major advantage so let's roll with it. so what we're gonna do is bake three
maps: the displacement map from multires modifier, the normal map from the high poly version of
the model and the diffuse map or the base color map from the high poly version as well. let's
start with the trickiest one, the height map. the basis of how the multi-res displacement baking
works is that it takes the difference between the sculpt subdivisions and the viewport subdivisions
and it bakes this difference as a displacement map for the minimal subdivision level for the viewport. it is important even crucial to define what
is our minimal level viewport subdivisions that we are baking for in this case we
zeroed it out so that's our minimal level. okay as Eevee currently doesn't support any kind
of baking we should switch over to Cycles instead. it is very easy to miss this step and go: "where are
the baking settings?" so, Cycles. as for the samples we don't need that much, actually just eight
samples will be quite enough for the type of baking that we are aiming for. now we have the bake
from multires checkbox ticked and the bake type set to displacement, that's practically all we
need for our multi-res heightmap baking workaround, our secret sauce of baking displacement maps in
Blender. now in the upper left corner of the ui we have the shader editor, i have created a bunch
of blank image texture nodes in advance, basically you can add those image nodes from the shift a
menu, i just did it in advance to save some time. so let us zoom into the first one that will be
our height map and press new to create a new image file. we can be brave and set the resolution
all the way to 4k, knowing that the baking process will actually be super fast. let's call it displacement_bake, and it's also crucial to enable the 32-bit float checkbox, otherwise the displacement will have this weird stepping effect.
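if you want the same setup as a script, here's a hedged bpy sketch. the property names below are the ones i believe back the "bake from multires" ui, and the sketch assumes the low poly already has a node-based material, so double-check everything against your Blender version:

```python
# sketch: prepare the multires displacement bake (Cycles, 8 samples, 4k 32-bit float target)
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 8
scene.render.use_bake_multires = True     # the "bake from multires" checkbox
scene.render.bake_type = 'DISPLACEMENT'

# 32-bit float image, otherwise the height map gets that ugly stepping
img = bpy.data.images.new("displacement_bake", width=4096, height=4096, float_buffer=True)

# the image texture node that is active in the shader editor receives the bake
lowpoly = bpy.data.objects["low poly"]            # assumed object name
nodes = lowpoly.active_material.node_tree.nodes   # assumes a material already exists
tex = nodes.new('ShaderNodeTexImage')
tex.image = img
nodes.active = tex

lowpoly.select_set(True)
bpy.context.view_layer.objects.active = lowpoly
bpy.ops.object.bake_image()     # the operator behind the Bake button for multires bakes
```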
after the new texture has been added it can be found in the image editor; for now it's just black. i think we're ready by now to press the bake button with the displacement_bake image texture selected in the shader editor, that's
important. oops! no objects found to bake from, i think we simply forgot to do something, we forgot
to select our object in the viewport most likely. we can either click on it in the viewport or in
the outliner. we should tell that it has become an active selection indeed by seeing an orange
outline. that was easy to fix, right? now let's click bake again and it should go without a hitch.
luckily it's so fast to bake this type of texture we should see it almost immediately in the
image editor, let's just pack it or save it externally, however you like. otherwise we will
lose it on quitting Blender and that will be such a shame. packing textures instead of saving
textures has its own advantages and disadvantages, i did it to save some time, and anyway the packed textures can be extracted into the folder with the .blend file automatically, it'll even create a
nice folder named textures for you conveniently. but if you're gonna save images by going image,
save as, that's totally up to you, that's fine. all right what should we bake next? now the second texture will serve as
a normal map, so let's click new, call it normal_bake, the resolution is set
to 4k, let's disable 32-bit float now and hit ok. once again, it is important to select this
particular image texture that you're going to be baking to and with that done the bake type
can be changed to normals. so after selecting the normal_bake image and hitting the
bake button we should see the progress bar firing up at the bottom of the screen and if everything
goes right, get our normal map. oh the time flies! just a few seconds passed and we already
have the normal map generated. let's preview it out in the image editor to see if everything
seems to be correctly generated and personally i have no objections, i think everything looks
smooth. now let's see, what is still left to do? well, first of all before i forget, let's
set up a proper displacement chain, here i prepared an empty image texture
coupled with the displacement node in advance, let's disconnect it, move it to the left like this
let's use the height map or displacement that we have baked from multi-res modifier and it should
go into the height input of the displacement node. okay and this empty texture node will
become a holder for the base color texture. clicking on new and calling it just like that,
base color underscore bake. the base color texture doesn't have to be the 32-bit float image so
i'm leaving this checkbox empty and confirm. don't forget to pack the textures or save them
externally for that matter. this time we won't use the multires modifier as a source of
baking data, we will use our high poly mesh with the diffuse texture instead, so let's toggle
the visibility back on in the outliner, select the high poly, then control click on the low poly and
make sure that that's the last object selected now we need to come over to the bake settings
and change the bake type from normal to diffuse. here it is, we are interested in the base color
only, so both direct and indirect contributions should be turned off. then it's absolutely crucial
to get the selected to active checkbox ticked, that's why we have two objects and we are baking
from one to another, as for the cage we won't be adding a custom one, what we need to do though is
adjust the extrusion. it creates a ballooned-out version of the mesh to cast the rays from, so it's a safe bet to set it to a low number like 0.01 or 0.02, something like that.
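here are those bake settings in script form, as a hedged bpy sketch; the object names are assumptions, and the base_color_bake image node has to be the active node in the low poly material before baking:

```python
# sketch: bake the base color from the high poly (selected) onto the low poly (active)
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.use_bake_multires = False   # this time we bake object to object, not from multires
scene.cycles.bake_type = 'DIFFUSE'

bake = scene.render.bake
bake.use_pass_direct = False              # color only, no lighting
bake.use_pass_indirect = False
bake.use_selected_to_active = True
bake.cage_extrusion = 0.02                # the small "ballooned out" ray casting offset

highpoly = bpy.data.objects["high poly"]
lowpoly = bpy.data.objects["low poly"]
bpy.ops.object.select_all(action='DESELECT')
highpoly.select_set(True)
lowpoly.select_set(True)
bpy.context.view_layer.objects.active = lowpoly   # the low poly must be the active one

bpy.ops.object.bake(type='DIFFUSE')
```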
let's check the high poly model first though, because i have an impression that we have lost the diffuse texture somewhere.
let's check it out. i'm disabling the visibility of the low poly, switching to material preview
so we can preview the diffuse texture and... huh, it looks blank. it looks like i managed to lose the diffuse texture along the way somehow. let's inspect our high poly and... indeed. where
is the texture? what's going on? oO. that is awkward. but what i usually do in such cases is search
in the image editor... and actually here it is! it just has to be plugged back into the base color.
so i'm cli... i mean, i click and drag it like this and it goes into the base color. hopefully
that will bring things back to normal. so with this issue fixed we can give it another
try. i selected the high poly model first then let's enable the visibility of
the low poly one, shift click on it, then we can try to hide the high poly temporarily
to see what we are doing and it is super important to select the texture that is going to be used
for baking in the shader editor and with all the rituals fulfilled we can at last return
to the baking menu and hit the button again. whoops! it prompts an error,
no valid selected objects, i think we need to enable the visibility of
the high poly mesh as well, that should do the trick and then select both meshes with the
low poly being the last one and hit bake again. we can see the progress bar firing
up and that's the good news for us! well... what's going on? why is the texture black? we should probably disable the multires modifier
and try that one more time. you see, it's a bad omen to say "the good news!" because there is no good
news in computer graphics, until you actually see the rendered result with your own eyes. baking
again... so after a few miserable attempts of reprojecting the base color from high poly to low
poly we succeeded, and that's good news, folks! now we just have to save the image, or rather pack it.
from now on it will be a part of the .blend file and now it's time to build the material. so base
color goes into the corresponding input of the principled bsdf shader and as you can see it looks
pretty lifeless and flat, that's because we need to reconnect the rest of the inputs. yes, our baked
normal map, i'm talking about you specifically. in order to work properly it needs the normal map
node from the shader editor, it can be found in the vector tab. and before we use this normal map it's
important to switch the color space to non-color or to linear otherwise the bump mapping will look
terribly off and totally crazy. so it goes into the color of the normal map and then it is silently
hooked up to the normal input of the material. all right that's looking much much better already.
to take it to the next level we can make use of the displacement map that we have generated.
as a quick refresher the displacement map setup in Cycles looks like that: we have the
baked height map which goes into the height input of the displacement node in the shader
editor and in turn this displacement node goes straight into the material displacement input.
then to finalize the micro polygon displacement setup in Blender we need to do a few things, first
of all we need to set up the subdivision modifier, here we go and enable the adaptive subdivision
check box within this modifier. if you don't see the adaptive subdivision checkbox right away
it means that the Cycles feature set should be switched over to experimental first and then
the checkbox with the new option should appear in there and it has to be ticked. but that's not
all, lastly we need to visit the material settings, scroll down all the way to the bottom of the
settings until we see the displacement drop down menu. there we need to choose displacement only and
hopefully after all these manipulations we would be able to enter the rendered mode in Cycles and
see the micro polygon displacement come into play. after some testing we found out that, after baking from the multires modifier, the scale of the displacement node should be set to something like
0.05. that practically makes it look very close to the high poly version of the mesh and so we think
that's pretty close. evidently a scale of one is way too strong a displacement for height maps baked from multires, so a scale of 0.05 seems to be the sweet spot, and in addition to that the mid level should be set to 0.5, but that's the default value for it so we don't have to worry about it that much.
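here's the whole material and displacement setup condensed into one hedged bpy sketch, with the 0.05 scale and 0.5 mid level we just talked about; property paths like material.cycles.displacement_method may differ slightly between Blender versions, so treat it as a sketch rather than gospel:

```python
# sketch: wire up the baked normal + displacement maps and enable adaptive subdivision
import bpy

lowpoly = bpy.data.objects["low poly"]
mat = lowpoly.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]
output = nodes["Material Output"]

# normal map: non-color image -> normal map node -> bsdf normal input
nrm_tex = nodes.new('ShaderNodeTexImage')
nrm_tex.image = bpy.data.images["normal_bake"]
nrm_tex.image.colorspace_settings.name = 'Non-Color'
nrm = nodes.new('ShaderNodeNormalMap')
links.new(nrm_tex.outputs['Color'], nrm.inputs['Color'])
links.new(nrm.outputs['Normal'], bsdf.inputs['Normal'])

# displacement: baked height map -> displacement node -> material output
disp_tex = nodes.new('ShaderNodeTexImage')
disp_tex.image = bpy.data.images["displacement_bake"]
disp_tex.image.colorspace_settings.name = 'Non-Color'
disp = nodes.new('ShaderNodeDisplacement')
disp.inputs['Scale'].default_value = 0.05
disp.inputs['Midlevel'].default_value = 0.5
links.new(disp_tex.outputs['Color'], disp.inputs['Height'])
links.new(disp.outputs['Displacement'], output.inputs['Displacement'])

# true micro-polygon displacement: displacement-only material + experimental adaptive subsurf
mat.cycles.displacement_method = 'DISPLACEMENT'
bpy.context.scene.cycles.feature_set = 'EXPERIMENTAL'
lowpoly.modifiers.new("Subdivision", 'SUBSURF')
lowpoly.cycles.use_adaptive_subdivision = True
```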
it is worth comparing the two meshes to make sure that we nailed the displacement settings indeed, because it can be pretty tricky to get right. so here is a
quick comparison between the raw high poly scan at the bottom and the low poly version of it with
the cycles micro polygon displacement at the top. these two models look fairly similar to
me, i think we can chalk it up as a success! if you feel like watching a few more comparisons
here's how it looks in Cycles with micro polygon displacement turned on... pretty awesome... and
here is the same mesh but rendered in Eevee this time without any displacement,
just a good old normal map applied and in my opinion that also looks pretty cool
and detailed and realistic, it is still based on the photoscanned asset and that's what gives
it its charm, obviously the realistic clothy stuff like that is notoriously hard to make in
3d so starting with a photoscan is a blessing. it is amazing how far we went with the open source
cross-platform photogrammetry pipeline, right? it is crazy to think that we can do all of this without
buying any kind of software. and even though it's far from being the fastest workflow ever, if you want the fastest workflow (which still consists of 90% open source software), check out the full course, the link is in the description... but anyway, that's super impressive that we can set up
the full photogrammetry pipeline by using only open source and only cross-platform tools, that
makes this entire workflow super accessible pretty much to anyone, the only bottleneck i see currently
is a hardware issue rather than a software issue, namely that an Nvidia gpu is still required for most of the operations in both Meshroom and Reality Capture, if you're going to use Reality Capture
for the same thing. aside from that that's super accessible, really! thank you for watching, my name
is Gleb Alexandrov, you're watching Photogrammetry Course in which we explore various ways to
create amazing photorealistic 3d from photos. make sure to check the full course if you want to
learn more, the link is in the description and join our Creative Shrimp Discord Community where
we talk about all kinds of things including photogrammetry. see you there, drink more coffee
and we'll change the world of computer graphics!