Unleash the power of 360 cameras with AI-assisted 3D scanning. (Luma AI)

Video Statistics and Information

Captions
We can do so many things with these phones nowadays, but what about these 360 cameras? Are they any good for 3D modeling?

Hello boys and girls, I'm Olli Huttunen, and this time I decided to dive into some trendy neural radiance fields: a technology that is revolutionizing three-dimensional scanning and the creation of 3D models. Neural radiance field, aka NeRF, is a step forward from typical photogrammetry modeling, where you try to make polygon surfaces from a bunch of photos. This new method harnesses AI to make calculations about the environment recorded by the camera and produces a kind of volume model from it that we can then rotate and explore inside three-dimensional space. It certainly sounds confusing, but when you start researching the topic, you will notice that it opens up completely new possibilities.

Right now, the way I see it, there are only two ways to train and create these NeRF models yourself. One is a somewhat complicated and challenging route that requires you to be able to install Python programs and run several terminal commands; the other is a considerably more user-friendly cloud service called Luma AI. Luma AI offers an easy-to-use app that you can download to your phone and start creating NeRF models with. You just choose a good object that you want to model and then scan it, basically videoing it from all sides by moving around it in loops. Very simple. After shooting, the video is sent to Luma's cloud for processing, and after about 30 minutes you can rotate the model freely and look at it from different angles.

With the Luma AI service you are able to create new camera movements inside your scanned environment and render quite compelling retakes without returning to the actual shooting location. Compared to the photogrammetry method, a neural radiance field is able to show reflections and transparent objects, which until now have been difficult to present in photo-based modeling.

Shooting objects with your phone is one thing, but what you really should be asking is whether these NeRF models can also be trained with other cameras. And the answer is yes. If you go to your computer and log into your Luma AI account, you can upload all kinds of material through the web browser, and your scan doesn't need to be only a video: you can also feed still photos to Luma. If you have multiple takes of your subject, you just package them into a single zip file and send that for processing.

But here's where things get interesting. When you go through the instructions on Luma's website, you will find a description that tells you that you can also do NeRF scanning with a 360 camera. Luma AI understands shots taken with dual fisheye lenses, and it also supports the so-called equirectangular image format, which is the typical full-view image produced by 360 cameras. Compared to shooting with a phone or a regular DSLR camera, the 360 camera sees a much wider area, and since this type of camera is often used with a selfie stick, it's much easier for you to reach up high or lower the camera to a suitably low angle to capture your subject from all sides.

I am using an Insta360 camera, where the stabilization is very good and the horizon lock feature keeps the image leveled no matter which way the camera itself is rotated. These features are the most important ones when you want to scan NeRF models with a 360 camera, because scanning is based on circular motion around the object. Shooting with a 360 camera, you don't have to worry about whether the subject stays in the center of the picture: in post-processing we can easily rotate the video image so that the target stays visible during all the rounds. With the Insta360 Studio application you can easily edit the shots and, for example, drag the object in the video to keep the subject in the center of the image. After you've finished, you just render this out as a normal HD video and upload it to the Luma service.

Since the 360 camera can see everywhere, you will inevitably also be included in the picture, especially if the complete equirectangular image is used. However, this is not a problem, because since you are constantly moving in relation to the background, Luma AI removes you from the picture, and only the things that stay in place remain in the final model. Likewise, shadows that are cast over your subject in sunny weather as you move around it do not necessarily spoil the scan, although shooting in those conditions is not recommended; the best conditions are overcast weather, when few shadows are visible and the subject is evenly lit. Also, you should not always rely on the feature where you, as the photographer, are erased from the model in the process. The places where you appeared in the picture always leave a mark on the model: the removal produces lumpy and uneven distortion in the 3D geometry. That's why you should always try to stay behind the back lens and point the front lens towards the object.

So when is it a good idea to use full equirectangular images? Using a full 360 image becomes useful in situations where you want to capture a tight spot and can't walk around anything, such as narrow alleys or corridors. Then you should position yourself below the camera so that both lenses can see as much as possible in both directions. Perhaps a good example is this scene scanned inside a car, where the selfie stick was used to pull the camera through the cabin via the open side windows. The NeRF model built from this gives you interesting opportunities to create camera movements that would be impossible to film inside a real car.

Here are a few more notes about using the Luma AI web service. One thing that is good to understand is that a neural radiance field can also be translated into a surface model generated out of 3D polygons, and when we look at and rotate the model in the browser, it is exactly that: a mesh model, which can look very uneven and lumpy, with strange distortions visible in the details of objects. What's important to understand is that this model shown in the browser is only a low-poly version. Much more data is stored in the NeRF model itself, and these deformations are smoothed out when you render a video from the model. A radiance field looks and feels very different from a typical 3D mesh, and it can produce almost the same kind of result as the original video from which the model was built.

In these examples I have been interested mostly in the surrounding environment, but are these scanned models any good if we separate them from the background and export them into another 3D program? There are a few surface model options where you can decide whether you want to download a low-, medium-, or high-poly version of your scan. But back to the question: how do these scans actually look when they are opened in a 3D program? Well, of course they are not that accurate. You would probably get much better results with the typical photogrammetry method, although those models also take a lot of cleaning: you have to patch all kinds of holes and smooth out uneven surfaces. But these NeRF meshes are very broken, and there are so many loose vertices that the model will fray immediately if you try to soften its surface with a sculpting tool. So these scans are not very useful in this format yet. We have to remember that this is a very young technology after all, and it is certainly developing at a fast pace, like everything in the field of AI at the moment.

Either way, NeRF modeling challenges you to think about 3D from a slightly different perspective. Right now, probably the most interesting option is that you can download the NeRF and open it in Unreal Engine as a volume model. This lays a whole bunch of new possibilities in front of you, where you can, for example, light your environment in different ways and use Unreal's great camera features, with depth of field and all. It's very exciting. Even if this technology is still seeking its purpose a bit, it is very fascinating, and it will be super interesting to see where these neural radiance fields will lead us in the future. With this, I at least found a new purpose for my 360 camera. I recommend you try it too: it's really fun. I hope you enjoyed this video. Hit the like button, consider subscribing to my channel, and I'll see you next time. Goodbye!
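The captions describe NeRF as producing a "volume model" rather than polygon surfaces. A toy sketch of the alpha-compositing math that volume rendering is built on, with made-up density values (this illustrates the general NeRF rendering idea only, not Luma AI's actual pipeline):

```python
import math

def render_ray(densities, colors, step=0.1):
    """Alpha-composite samples along one camera ray, the way a radiance
    field turns a learned density/color volume into a single pixel.

    densities: non-negative density at each sample point along the ray
    colors:    (r, g, b) color at each sample point
    """
    transmittance = 1.0            # fraction of light not yet absorbed
    pixel = [0.0, 0.0, 0.0]
    for sigma, rgb in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step)   # opacity of this sample
        weight = transmittance * alpha
        for c in range(3):
            pixel[c] += weight * rgb[c]
        transmittance *= 1.0 - alpha            # light left after this sample
    return pixel

# A ray that crosses empty space and then hits a dense red surface:
densities = [0.0, 0.0, 50.0]
colors = [(0, 0, 0), (0, 0, 0), (1.0, 0.0, 0.0)]
print(render_ray(densities, colors))  # mostly red, ~[0.993, 0.0, 0.0]
```

Because the model stores density and color everywhere in the volume, semi-transparent and reflective-looking content can be reproduced, which is what the video contrasts with photogrammetry's hard polygon surfaces.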
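The equirectangular format mentioned in the captions maps the full sphere of view onto a flat image: image width covers 360° of longitude, height covers 180° of latitude. A small sketch of that pixel-to-direction mapping (the 5760x2880 resolution is just an illustrative example, not a claim about any specific camera):

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map a pixel (u, v) in an equirectangular 360 image to a unit
    view direction: longitude spans the width, latitude the height."""
    lon = (u / width) * 2.0 * math.pi - math.pi     # -pi (left) .. +pi (right)
    lat = math.pi / 2.0 - (v / height) * math.pi    # +pi/2 (top) .. -pi/2 (bottom)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center pixel of the image looks straight ahead (+z):
print(equirect_to_direction(2880, 1440, 5760, 2880))  # -> (0.0, 0.0, 1.0)
```

This is why a single equirectangular frame covers what both fisheye lenses see at once, and why the photographer is unavoidably somewhere in the picture.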
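The captions note that exported NeRF meshes are full of loose vertices that make the surface fray under sculpting tools. One typical cleanup step is dropping vertices that no face references and re-indexing the faces; a minimal stand-alone sketch of that idea (hypothetical helper, not a Luma AI export feature or any particular 3D package's tool):

```python
def remove_loose_vertices(vertices, faces):
    """Drop vertices not referenced by any face and re-index the faces.

    vertices: list of (x, y, z) positions
    faces:    list of vertex-index triples
    """
    used = sorted({i for face in faces for i in face})     # indices kept
    remap = {old: new for new, old in enumerate(used)}     # old -> new index
    new_vertices = [vertices[i] for i in used]
    new_faces = [tuple(remap[i] for i in face) for face in faces]
    return new_vertices, new_faces

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (9, 9, 9)]  # last vertex is loose
faces = [(0, 1, 2)]
clean_verts, clean_faces = remove_loose_vertices(verts, faces)
print(len(clean_verts), clean_faces)  # 3 [(0, 1, 2)]
```

Real 3D packages offer equivalent built-in operations (e.g. deleting loose geometry or merging vertices by distance), which is usually the first pass before trying to sculpt a scanned mesh.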
Info
Channel: Olli Huttunen
Views: 64,610
Keywords: 3D modeling, NeRF, Luma Ai, Artificial Intelligence, Machine Learning, Computer Graphics, 3D Rendering, 3D Scanning, Photogrammetry, VFX, insta360 One RS, insta360, 360 video, 360 camera, neural radiance fields, neural radiance field, 3D modelling, using 360 camera for modelling, Blender, create 3D scannings, 3d modeling tutorial, luma ai tutorial, 3D web browsing, polygon models, Mesh models, Unreal plugin, Shooting with 360 camera, Editing 360 footage, Equirectangular image
Id: kV0OAvlXShk
Length: 11min 50sec (710 seconds)
Published: Thu Jun 15 2023