Creating a 3D model just by taking lots of pictures of a real object? Yes! Photogrammetry is back and it's easier to use, gives better results and is still completely free. You've seen our first video about photogrammetry, right? A quick recap.
You take pictures of an object from all possible directions and throw them at the photogrammetry software, which tries to estimate the camera positions and creates a "point cloud", a bunch of 3D points that resemble the object. To get a printable mesh, you have to triangulate the 3D points. In the end, we clean the model a bit, patch all the holes, slice it and we are ready to print!
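To make that triangulation step concrete, here is a minimal sketch using the separate Open3D library (Meshroom handles all of this for you internally; the file names are placeholders, and depth=9 is just a typical value):

import open3d as o3d

# load a point cloud exported from a photogrammetry run
pcd = o3d.io.read_point_cloud("cloud.ply")
pcd.estimate_normals()  # surface reconstruction needs per-point normals

# Poisson reconstruction triangulates the points into a surface mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("mesh.ply", mesh)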
Well, all this sounds really good and simple, but in reality, our original tutorial was quite complex. Luckily for us, things have changed and a new player has entered the photogrammetry ring. Meet Meshroom!

Meshroom is a free, open-source photogrammetry software with a beautiful UI. It drives the underlying AliceVision framework, which is the result of cooperation between multiple universities, labs and a French post-production company. The base interaction is about as simple as it gets: drag and drop pictures into the Meshroom window, hit START and wait for the finished model.
However, with Meshroom you can also augment the reconstruction. That means you can add more pictures to an already half-finished solution when you notice in the preview that some area could use more detail. And even better, with Meshroom you can do Live Reconstruction! In this mode, you repeatedly take a set of pictures, upload them to a folder and they get automatically processed. A preview is displayed and you can decide which part of the model needs more detail. You then take more pictures and the whole process repeats until you've captured the model from all angles.

But before we get to play with Meshroom, let's go over important steps that you should follow when you're taking pictures for photogrammetry.
Your smartphone camera will work just fine; these two models were scanned with one. But if you have a DSLR, that's even better. If you'll be using a DSLR, crank the aperture to at least f/5.6 so that the whole model you're trying to capture is in focus. It's best to move around the target object in circles, varying the angle and height after each pass. During our testing, we often shot 50, 100 or even more pictures to capture every detail. It doesn't take nearly as long as you'd expect.

It's really important that you don't move the object between pictures. If there are people or cars moving near the edge of some pictures, it's not ideal, but Meshroom can handle it. Still, try to keep these moving elements to a minimum. The object should make up a significant portion of each image, and you can take close-up shots to capture delicate details. If possible, try to avoid hard shadows; taking the pictures outside on a cloudy day is a great way to get even lighting from all sides. You can use zoom or even mix pictures from totally different cameras. Meshroom is really great in this regard.

Can you make a video instead? Yes, but don't do it. Although it is technically possible to use a video rendered into individual images as an input for Meshroom, the quality is much lower compared to standard images, the metadata will be missing and you'll be inputting hundreds of images. Standard photos are simply better.
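That said, if you want to experiment with video anyway, a rough sketch like this one (using the OpenCV library; the clip name and frame step are made-up values) can extract stills to feed to Meshroom:

import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("scan.mp4")  # placeholder clip name
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % 15 == 0:  # keep roughly one frame out of every fifteen
        cv2.imwrite(f"frames/frame_{saved:04d}.jpg", frame)
        saved += 1
    index += 1
cap.release()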
What if you took the pictures in front of a perfectly white background and rotated the model between the pictures? This sort of works, but again, we do not suggest it. The results are simply much worse compared to just walking around the model. Ideal targets for photogrammetry have a textured or rough surface. Capturing glossy surfaces is tricky, but if it's an option, you can cover them with some powder, like flour, or with painter's tape to avoid reflections.

Ok, we're now ready to begin the reconstruction. You're most likely going to use the standard reconstruction when scanning objects outside or when you're simply away from your PC. In this case, let's assume you already took all of the pictures, got home and now want to reconstruct the 3D model. The workflow is really simple: copy all of the pictures to a folder on your hard drive and then drag and drop the folder or the selected images into the Meshroom window. Save the project to a destination of your liking, otherwise the reconstruction gets stored in a temporary folder.
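As a side note, Meshroom also ships a command-line pipeline alongside the GUI, so the same workflow can be scripted. This is only a sketch, assuming the binary (called meshroom_batch in recent releases) is on your PATH and your photos sit in ./photos; check your version's --help before relying on it:

import subprocess

# run the full default pipeline headless (flags assumed per current releases)
subprocess.run(
    ["meshroom_batch", "--input", "./photos", "--output", "./model"],
    check=True,
)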
Now you can either hit Start, but what's better is to right-click the StructureFromMotion node and hit Compute. Actually, let's quickly talk about these nodes. Each one of them corresponds to an important step in the reconstruction; the individual steps are very nicely described on the AliceVision web page. You can manually begin the computation of a node and it will automatically compute all nodes before it. If you double-click a computed node, it will load the result. That's why it's better to start by computing the StructureFromMotion node. Compared to the full reconstruction, this usually doesn't take long at all and you'll get the reconstruction preview. All pictures that got successfully matched will have a green check mark next to them, and you'll see the estimated camera positions
in the 3D view. Depending on how well this looks, you can either compute the full reconstruction, or, if too many pictures got discarded, you can augment the reconstruction or fine-tune the settings, which are very well described on the AliceVision wiki. The full reconstruction may take a while; the nodes at the bottom will turn green one by one as they complete. You may hit Stop at any time and resume the reconstruction later. If you took a lot of pictures, it's not a bad idea to let the reconstruction run overnight. Once the full reconstruction finishes, you can double-click the Texturing node to preview the final mesh. You can also right-click on any of the completed
nodes and select Open Folder. The output file format is the commonly used Wavefront .obj, which can be imported directly into Slic3r. Even so, you'll most likely want to do at least some very basic clean-up of the model before printing it.
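If you'd rather script that clean-up, here's a minimal sketch using the trimesh Python library (just one option among many; Meshmixer or Blender work just as well, and the file names are placeholders):

import trimesh

mesh = trimesh.load("texturedMesh.obj", force="mesh")

# photogrammetry often leaves small floating islands; keep the biggest piece
parts = mesh.split(only_watertight=False)
mesh = max(parts, key=lambda part: len(part.faces))

trimesh.repair.fill_holes(mesh)  # patch the small holes in the surface
mesh.export("cleaned.stl")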
The finished model, printed from Prusament PLA Silver and Galaxy Black, turned out really well, right?

Let's say you're scanning something at home. You took about 60 pictures and the reconstruction is going well, except for one area, which you didn't capture well; some pictures got discarded and that part of the model is now missing detail. With Meshroom, you can simply take more pictures and add them to the existing reconstruction. It's essential that you have not moved the object between the individual series of pictures. If you have moved it, augmenting the reconstruction will probably be very difficult. When taking pictures to fill in a poorly captured area, we suggest shooting about 5 to 10 photos, and you can try to fill multiple areas at once. New pictures get matched against the full set of photos. That means that adding new pictures may even cause previously discarded images to be successfully matched with the new series. Whenever you add a series of pictures to an existing reconstruction, a new branch will appear in the Graph Editor. Again, you only need to compute everything up to the StructureFromMotion node, which is usually pretty fast. As soon as it turns green, double-click it to update the preview. When you think you have enough pictures for the final reconstruction, right-click the bottom rightmost node and hit Compute. This is the original model and this is the one printed from Prusament PLA Lipstick Red. Like two peas in a pod!

Live Reconstruction is the most fun way to do photogrammetry.
Click on View - Live Reconstruction to open the setup panel. Select a folder to which you'll be uploading new pictures and the minimum number of images that will be processed at each step. Now hit Start in the Live Reconstruction panel. The first series of pictures should include at least 10-20 images and should focus on the general shape of the object; let's not start by taking close-ups. Once you upload the pictures to the selected folder, they get automatically processed and added to the reconstruction.
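If your camera or phone syncs pictures to one folder while Meshroom watches another, a tiny helper script can forward new shots automatically. This is purely our own convenience sketch, not part of Meshroom, and the folder names are placeholders:

import shutil
import time
from pathlib import Path

camera_dir = Path("camera_sync")         # where new shots arrive
watch_dir = Path("live_reconstruction")  # folder selected in Meshroom
seen = set()

while True:  # stop with Ctrl+C when you're done shooting
    for picture in camera_dir.glob("*.jpg"):
        if picture.name not in seen:
            shutil.copy(picture, watch_dir / picture.name)
            seen.add(picture.name)
    time.sleep(5)  # poll every five seconds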
There is a small catch at the end: you'll have to re-link the last StructureFromMotion node to the PrepareDenseScene node. Right-click the link and choose Remove, and then drag a new link from the bottom-most StructureFromMotion node to the PrepareDenseScene input. Now you can compute the final mesh.

Almost all meshes created by 3D scanning or photogrammetry will have a hole at the bottom. Luckily for us, we need a flat base that can be easily placed on the print bed anyway, so a simple plane cut in Meshmixer, Blender, 3D Builder or any other program that you prefer will do. Secondly, the scale of the model will be pretty much random, so don't be surprised when the model is really tiny after import; simply scale it up.
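Both fixes can also be scripted. Here's a hedged sketch with the trimesh library; the scale factor and cutting plane are made-up values you'd tune per model, and the cap option for closing the cut is only available in recent trimesh versions:

import trimesh

mesh = trimesh.load("texturedMesh.obj", force="mesh")
mesh.apply_scale(50.0)  # photogrammetry units are arbitrary, so scale freely

# keep everything above a horizontal plane; cap=True closes the cut,
# giving the flat base we want for printing
cut = mesh.slice_plane(plane_origin=[0.0, 0.0, 2.0],
                       plane_normal=[0.0, 0.0, 1.0],
                       cap=True)
cut.export("printable.stl")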
There is one technique in particular that makes a perfect combo with photogrammetry, and that's sculpting. We will make a separate video about it in the future, but in the meantime, feel free to check the tutorials we linked in the description. And here's the final result, printed with Prusament PLA Azure Blue.

We strongly recommend checking the Meshroom wiki; there's a lot of information about how to solve some errors you might encounter and what parameters are worth playing with. And if you want to contribute, that's even better: submit a pull request or contact the developers directly by email. So, are you convinced it's time to give photogrammetry another chance? Consider subscribing if you enjoyed the video, and happy printing!
Thanks for the post. I have been struggling to achieve photogrammetry, but this makes me want to give it another go.
Cool, but you still need an NVIDIA CUDA-compatible video card, right? That eliminates Mac users AFAIK.
Love Dvořák. The New World Symphony is one of my favorites.
Good video! here’s what you need: https://github.com/alicevision/meshroom
Yep, going to give this a try.
Have to teach Josef to spell Photogrammetry :-)
Can anyone speak to how this compares with paid solutions? Asking for work.
I wonder if the final step (where CUDA support is required) could be sent to a different system. I have access to systems with NVIDIA boards in them, but they don't have a way to easily do remote GUIs. The idea is: get all the photos loaded and do the analysis on my laptop with its Intel video card, then send all the data to a system with an NVIDIA card to do the final mesh generation.
I just installed and ran a very hastily executed scan of a USB stick and I can confirm that it is easy to use and produces pretty good results. I can’t wait to do a proper job!