Apple's impressive entry into photogrammetry: Object Capture API

Captions
With the latest macOS release, Monterey, Apple democratizes photogrammetry. For those of you who might not know, photogrammetry is the process of creating a 3D model out of a series of photographs. You can use either your phone or a regular camera, snap a series of pictures, and the software figures out the rest. Photogrammetry normally requires a dedicated piece of software, but with Apple adding the Object Capture API right into the OS, things are now a whole lot easier. And even though Monterey is still in beta, we already have an app taking advantage of the API. It's called PhotoCatch, it's totally free, and it's the application I'm going to use to test out Apple's implementation. Let's start.

Photogrammetry constructs 3D data by detecting the same point in multiple images. This is the reason we need to take a lot of pictures with a considerable amount of overlap; that way the software can track each point through all of the photos. There are several ways to approach the capturing process. One way is to snap pictures as we move around the object. This is perfect if the object is big, for example a statue, and we have no other way of taking the images. If, though, we're shooting smaller-scale objects, we have better options: we can use a turntable and snap pictures as the object slowly rotates. Once one side is done, we turn the object over and repeat the process for the other side. The more pictures we take, and the more detailed they are, the cleaner and higher quality the final 3D model will be. Just to give you an idea, most of the models I created for this video were done with around 70 to 100 photos. For smaller objects this is usually enough to create a correct representation of the object, but not enough to capture a lot of detail, especially if the object is quite complex.

Apple has a dedicated page about photogrammetry where they share their workflow along with sample images and 3D models; I'll have the link in the description below. Apple uses the turntable method to capture the object, and going through the photos on Apple's website we get a hint of who the target audience is: regular people and small businesses, people who just want to quickly shoot a product, create a 3D model out of it, and upload it to their website or social media channels. There are a couple of things that point me in this direction. First off, the actual images: they're not using a fancy camera to take the pictures, they're just using a phone, which is what most people have in their possession. They also don't go crazy with the amount of images taken. The photos are enough for a nice 360-degree view of the object, but not great for zooming in and digging into details. It's basically the amount of photographs a regular person would be willing to take without driving themselves nuts. They also don't really try to control the lighting hitting the object; they probably just use one light and a big light diffuser, and that's about it. From the lemon meringue pie example we see a lot of reflections on the pie, which is not really how a 3D artist would grab these pictures. With all these reflections we basically bake the lighting information into the texture. A 3D artist would cross-polarize the light, so they would only get the diffuse part of the object without any reflections. I'll explain what that means in a little bit, but here's the gist of it: on the left we have the object shot with a regular light, and on the right we have the object shot with cross-polarized light. Notice how the cross-polarized image shows no reflections. This is exactly what we need: we just want to capture how the surface looks, without any reflections from the light setup or the environment. The reflections and the lighting information will come later, once we start lighting the object in our 3D scene.
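By the way, this isn't covered in the video, but the reason cross-polarization works is Malus's law: specular reflections preserve the polarization of the light hitting the surface, while diffuse reflection scrambles it. With a linear polarizer on the light and the lens filter rotated 90 degrees to it, the still-polarized specular component is blocked while about half of the depolarized diffuse light gets through:

```latex
% Malus's law: a linear polarizer transmits I = I_0 cos^2(theta), where theta is
% the angle between the light's polarization and the filter axis.
I_\text{transmitted} = I_0 \cos^2\theta
% Specular reflections stay polarized, so with the lens filter crossed at 90 degrees:
I_\text{specular} \propto \cos^2(90^\circ) = 0
% Diffuse reflection is depolarized, so roughly I_0 / 2 of it still passes, which is
% why the cross-polarized shot keeps the surface color but loses the highlights.
```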
So for these reasons, I think Apple is trying to make the argument that photogrammetry is something anyone with really basic photo equipment can do, and to be honest, I totally agree with that sentiment. Of course you need to be a little bit organized, but before this video I had never really worked with photogrammetry. I knew the basic principles but had never tried it myself, and I had an almost 100% success rate with the objects I photographed. I'm also happy to say that the API is definitely capable of producing detailed meshes. Of course you need to take more photos, but if you're willing to spend the time, the algorithm can give you detailed meshes; Apple just chose not to showcase that.

Now let me show you what kind of equipment I used. It's a combination of high tech and super ghetto all at the same time. Basically, I'm using whatever I have in my studio, so in some ways it's not that different from Apple's target audience. I only had to buy one key component for my setup, but it's not something that will break the bank.

The camera I'm using is the GH5, and chances are you already have a better camera than this one. It's a four-year-old camera with a Micro Four Thirds sensor, so it's not a monster camera by any means, but it definitely does the job.

For the background I went with black, but as we can see from Apple's photos, it's not really needed. A smooth background surface, along with the smooth surface of the turntable, doesn't register at all in the software, so you could easily go with that. Just to be on the safe side, though, I went with a completely black background. Next time I'll try a white one; my guess is it will also help bounce the light around, which is certainly helpful if you don't have a great light setup.

Next up is this motion control system, the Genie Mini II. This is definitely not something everyone will have in their possession, but you don't really need one either; it just makes the process a whole lot easier. Basically any cheap turntable will do, and I believe Apple is also using a simple one. The nice thing about a motion control system is that we have complete control over how many photographs we take per 360-degree rotation, so we get a consistent overlap, which is quite important for photogrammetry; 24 shots per rotation, for example, means one photo every 360/24 = 15 degrees. As a plus, we also get a little bit of automation: by connecting the Genie Mini to the camera, the motion control system can automatically trigger a photo. Unfortunately, the cable I have is not long enough, and I also couldn't find one for my camera, so the automation part is out of the question for me. I can still set a delay for every rotation, though, so at least I have a pause between rotations, enough to trigger a photo manually. Syrp, the company that produces the Genie Mini, has a really nice accessory which screws on top of the system and gives you a nice, stable platform. That's all well and good, but I don't have that, so I need to improvise. Thankfully, I still have the packaging, and it looks like the top part is perfect for the job. It wobbles a little bit, but with the weight of the object it'll be stable enough. You gotta do what you gotta do. As you can see, I also painted it black in an effort to blend it with the background.

Next up is the lighting.
We need complete control over the light source, so we're going to get rid of any lighting from the room. I'll use a single light to light the subject; I'm using an LED panel, which is basically rows and rows of LEDs. Ideally you'd also use a light diffuser to soften shadows and scatter the light, but I didn't use one either. I do have a diffuser, but I don't have the proper setup to mount both the light and the diffuser at the angle needed. So if you don't have one, just make sure that the light you have is strong enough to overpower any other light in the environment, and also make sure that the light is right in front of the object, so any shadows it produces don't cover parts of the object. With the light hitting the object from the front we still get some shadowing, but it's as reduced as possible; if we placed the light on the sides, the shadows would obstruct several areas of the object. The reason we don't want shadows is to ensure that the software always has a clear view of every area of the object. This is also why it's nice to have a light diffuser: it softens all the shadows, ensuring maximum visibility for the software.

Now here comes the important part: capturing images with cross-polarized light. Just as a reminder, cross-polarized light helps us capture the true nature of the surface, without any reflections. For this we need two things: a polarizing filter for the lens and a polarizing filter for the light. Using the lens filter alone can only get us this far; as we rotate the filter, you can see that we cut off some of the reflections, but not all of them. We need to polarize the light source as well in order to completely eliminate reflections and specular highlights. This is a linear polarizing filter, and it's just a sheet you place in front of the light. I was a bit limited, because the suppliers that sell it by the foot couldn't deliver to Greece, so my only option was pre-cut sheets. I bought this 50-by-50-centimeter sheet, which doesn't really cover the entire surface of the light, so once more I had to improvise. For mounting the filter I'll use this incredibly high-tech method: sellotape and paper clips. And for masking the LEDs not covered by the sheet, I'll use a piece of cloth. Now, with the filter mounted, we just need to rotate the polarizing filter on the lens until all reflections are gone, and that's it: our light is cross-polarized and we can shoot a completely diffuse object.

Now that we're all set up, it's time to start taking some pictures. I want to start with a simple object first, see how that goes, and then go from there. So I'll start with this bark slice, which I think won't take that many pictures to complete; I'll just shoot 24 images for this side, then flip it around and shoot 24 more. I'm also curious to see how the algorithm will handle such a thin surface. It could completely fail, but let's see. After that I'll continue with these stones here, which will also probably be super easy to do. If that goes well, I'll experiment with this garlic object here, and finally this super clean and well-maintained shoe. Let's go.

Now that all the photos are taken, let's see how things go in PhotoCatch. The application is incredibly simple: you just choose the folder where all the images are, then choose the quality, and that's it. You just let it do its thing, and hopefully, once it's done, we have a good-looking object. That was super easy.
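For context, an app like PhotoCatch is presumably a front end for RealityKit's PhotogrammetrySession, the Object Capture API that ships with Monterey: you point it at a folder of images, pick a detail level, and it writes out a USDZ. A minimal sketch of that flow, with hypothetical input and output paths and a made-up reconstruct() wrapper, would look something like this:

```swift
import Foundation
import RealityKit

// A minimal sketch of driving the Object Capture (photogrammetry) API directly on macOS 12.
// The paths are hypothetical placeholders; point them at your own capture folder.
func reconstruct() async throws {
    let imagesFolder = URL(fileURLWithPath: "/Users/me/Captures/BarkSlice", isDirectory: true)
    let outputModel  = URL(fileURLWithPath: "/Users/me/Captures/BarkSlice.usdz")

    // Turntable shots arrive in order, so the sequential ordering hint is appropriate here.
    var config = PhotogrammetrySession.Configuration()
    config.sampleOrdering = .sequential
    config.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: config)

    // Start listening before kicking off the request so no messages are missed.
    let watcher = Task {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fractionComplete):
                print("Progress: \(Int(fractionComplete * 100))%")
            case .requestComplete(_, .modelFile(let url)):
                print("Model written to \(url.path)")
            case .requestError(_, let error):
                print("Reconstruction failed: \(error)")
            case .processingComplete:
                return                       // all requests finished
            default:
                break                        // skipped samples, downsampling notices, etc.
            }
        }
    }

    // "Choose the quality": detail ranges from .preview and .reduced up to .full and .raw.
    try session.process(requests: [.modelFile(url: outputModel, detail: .full)])

    try await watcher.value
}
```

The detail parameter is most likely what an app like PhotoCatch surfaces as its quality setting; .preview is handy for a quick sanity check of coverage before committing to a lengthy .full or .raw reconstruction.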
I'm quite surprised things worked this well. I didn't really face any issues; the pictures worked incredibly well for all the objects, and I consistently got a clean result straight out of the app. I quickly converted the materials to Redshift, and the few rendering tests I did show great promise. The objects look really nice and they render fine straight out of the app, which gives me the boost needed to go for the more complex object. Let's give this shoe a try.

That's amazing. First try, and this shoe already looks great. We lack some detail, but that's expected, since the images are long shots of the shoe. Now that I have the basic shape there, I'll try and see if I can get some more detail out of the images. The plan is to shoot some close-ups of the shoe, and hopefully the system will be smart enough to combine these into one single object. In theory it should work, since that's how most photogrammetry applications work, but I don't know how Apple's algorithm will behave. I'll first try shooting a few pictures of this area here, see how that works, and if everything's okay, I'll shoot more close-ups of the entire shoe.

Wow, it actually worked. We definitely have a more detailed object. It looks like the system captured enough detail to also define the cloth underneath the shoe, but that can easily be fixed; what matters is that the mesh definitely has more definition. It's interesting to see that the algorithm failed on the rest of the shoe, even though I'm using the same images as before. I'm guessing it uses the low-res photos for the overall definition of the shoe and the high-detail ones for the individual parts, so if high-detail shots are missing for the rest of the shoe, the mesh coming from the long shots is disregarded. I'm not exactly sure how it all works, but my guess is that once I shoot more close-ups of the rest of the shoe, the model won't have any of these holes. Let's find out.

And here's the final result. I'm really impressed. There are some problematic areas, like the lack of an inner sole here, or this area here; the sole underneath also has a texturing issue. But I know exactly why these things happened, and it's entirely my fault: I basically didn't follow my own rules. The light, as I started taking close-ups, was in the way, so I moved it to the sides. Of course, that's a big no-no. By doing that I added these huge shadows to the object, and in combination with the black background, the algorithm couldn't figure out the overall shape. The same goes for the texture: I didn't get a clean capture of the sole in this part, so the only thing that's there is this black area. I got a little bit too excited and wasn't extremely careful with the captures. Definitely something to improve on with the next object.

So now that I have all these objects, let's try to make a scene out of them. It's going to be a super weird scene, but it doesn't matter; I just want to see how it all works out. And here it is: definitely the weirdest product shot ever. I really should have washed the shoes before I started this, but I'm quite happy with how it all turned out, especially considering it's the first time I'm actually trying out photogrammetry. The next step is to try and go for super crazy detail and also go through the whole procedure: that means retopologizing the mesh and creating a low-res mesh that uses displacement maps for the details. Definitely something for a future video.

In general, I'm very impressed by Apple's first photogrammetry implementation. It really works, and it works incredibly well. It's quite awesome. It's also really amazing to see how people are already taking advantage of the API; we're not even out of the beta and we already have an app that works great.
Anyway, I think that's about it for this video. I have links in the description below for all the equipment I used, so if you want to give photogrammetry a try, you'll have a good idea of where to start. I'll leave the link to the PhotoCatch app in the description below as well. And with that, we've reached the end of this video. Take care, and I'll see you in the next one.
Info
Channel: Dimitris Katsafouros
Views: 153,722
Keywords: Cinema 4D, OSX Monterey, Apple, Photogrammetry, Photocatch, Redshift
Id: vh2LOEG9Zf8
Length: 16min 34sec (994 seconds)
Published: Sat Jul 31 2021