Unlocking the Potential: Mastering 3D Gaussian Splatting using Pre-rendered Images

Video Statistics and Information

Captions
In Blender software I have opened this 3D model of a room, where I can search for a good composition and create some nice archviz pictures. With my hardware it takes about 37 seconds to render one good-quality image from this room with Blender's own Cycles rendering engine. But instead of spending time rendering each image individually, wouldn't it be cool if we could rotate and view this model in this quality in real time? I think we are getting close to it.

Hello boys and girls, my name is Olli Huttunen. This time my interest is focused on the question of what advantages can be achieved when the already rendered material of a 3D model is translated into 3D Gaussian splatting format. This idea actually arose from discussions written in the comments of my previous videos. So far I have spent time outdoors shooting and scanning 3D models of real-life objects, and I was given the idea of what this new technology would look like if it was applied to 3D images produced from Blender or other 3D programs that can use high-quality path tracing rendering technology. At first I was a bit confused: why would anyone want to convert something that is already in digital 3D format and train a 3D Gaussian splatting model out of it? But then I realized that it's all about the calculation speed. One of the biggest advantages of 3D Gaussian splatting is that it can present very high-quality 3D images in real time. So if we do the heavy calculation work carefully and spend the time it takes to render the model once, we can probably view the finished model, move around it freely in full quality, and make new camera animations from it in real time. At least this was the theory of an idea which I proceeded to test.

I decided to test two models for this process. The first was this room, the kind often used in architectural visualizations, and the second was a mock-up model of a soda can, the kind commonly used in product visualizations. By the way, I use this BlenderKit add-on, which brings a huge library of materials and different kinds of models to Blender, and in its easy-to-use menu system I found these suitable models for this experiment.

In order to get a comprehensive rendering of a 3D model, I decided to approach it in the same way as I would have scanned it in real life with a real camera. So in Blender I made this camera animation that flies around the room and tries to capture the space from all angles and heights. I ended up building a 260-frame-long animation in which the camera alternately rises up to the ceiling and descends close to the floor, so that the room could be filmed as widely as possible from many different angles. The fun part here was that in a virtual scene the camera can move completely without limits and reach places that you wouldn't be able to photograph in a real-life situation. As I have already learned from shooting situations in real life, it's worth using a wide focal length setting when scanning the space. With an animation like this I thought I could achieve the necessary angles to make the 3D Gaussian splatting model as functional as possible.

For the soda can I only focused on making an easy rotation animation, where the camera rotates around the object a couple of times at different heights. Since this product model is a relatively simple object with static water particles frozen in place, I settled on 240 images for the rotation. It took about 17 seconds to render a single image of this soda can with the Cycles rendering engine.
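As a rough illustration of the kind of fly-around capture described above, here is a minimal Blender Python sketch. It is only a sketch under assumptions: the object names ('Camera', 'Target'), the orbit radius, the height curve, and the output path are placeholders for the example, not values taken from the video.

    # Minimal sketch: orbit a camera around a target and render the frames with Cycles.
    import math
    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.frame_start = 1
    scene.frame_end = 240                      # e.g. 240 frames for a turntable-style capture

    cam = bpy.data.objects['Camera']           # assumes a camera object named 'Camera'
    target = bpy.data.objects['Target']        # hypothetical empty placed at the subject's centre
    scene.camera = cam

    # Keep the camera aimed at the target with a Track To constraint.
    track = cam.constraints.new(type='TRACK_TO')
    track.target = target
    track.track_axis = 'TRACK_NEGATIVE_Z'
    track.up_axis = 'UP_Y'

    radius = 4.0                               # orbit radius in metres (assumption)
    for frame in range(scene.frame_start, scene.frame_end + 1):
        t = frame / scene.frame_end
        angle = 2 * math.pi * 2 * t                          # two full orbits over the animation
        height = 1.0 + 0.8 * math.sin(4 * math.pi * t)       # rise and descend between passes
        cam.location = (radius * math.cos(angle), radius * math.sin(angle), height)
        cam.keyframe_insert(data_path='location', frame=frame)

    # Render the whole animation to individual images for point cloud generation and training.
    scene.render.filepath = '//renders/frame_'
    bpy.ops.render.render(animation=True)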
After rendering the animation, the next step in the process is to generate a point cloud from the source images. An open-source program called COLMAP is used for that. It works in a command prompt window, and in this case I used a ready-made Python script for the conversion. This script comes with the source code from Inria, who have developed this Gaussian splatting method. Calculating the point cloud takes its own time, but it is relatively fast compared to the last step of the process, where the Gaussian splatting model is trained with the original images and that point cloud dataset.

Training is the most time-consuming phase, and it takes a lot of memory on the graphics card. In training, the process runs up to a maximum of 30,000 iterations. It is specified in the basic settings that when the calculation reaches 7,000 iterations the state is saved, and after that it continues toward the full accuracy of 30,000 iterations. I have many times had to face the fact that the process often crashes because I run out of memory on my graphics card and I can't reach the maximum iterations. But I have noticed that even if I only get to 7,000 iterations, the Gaussian splatting model still becomes quite valid and looks really good, so reaching the maximum value is not necessary. It often takes about 25 to 30 minutes to convert the point cloud and train the model, so with rendering included the calculation of the room model took approximately three hours in total, and a little less for the soda can, about 1 hour 45 minutes in total.

So how were the end results? Was it worth all that rendering? Well, yes, I would say that this one looks confusingly good. Here is a screen capture from the SIBR viewer, where the dataset of the trained Gaussian splatting model can be viewed. Here I'm controlling the camera from the keyboard, and the view is running in real time at 16 frames per second, as shown by this meter. In my opinion this looks absolutely amazing, and it proves that this theory also works in practice. Of course there are flaws in this model, and it cannot draw all the textures perfectly, but this is because I didn't manage to get the necessary angles from every corner or part of this room model with my scanning animation. For the most part, though, this Gaussian splatting model is very clean, and it manages to look like I'm floating in real time inside the Cycles rendering engine, without that sample noise pattern that appears in the Blender viewport.

Let's look at that soda can next. This also feels and looks somehow incredible, although here the flat blue background color produces peculiar floating artifacts that suddenly come to the fore when you try to view the subject from certain angles. But even so, this is an interesting sample with all these water particles, which I think fit well in this Gaussian theme, because here a static recording is created that represents a frozen moment.

All in all, I think there is a lot to digest and think about regarding what this technology could mean in practice, for example for game development. Quite simply, this might be a function that will be integrated into 3D programs in the future, so that with a little pre-calculation we can view models as high-quality renderings in real time. What kind of thoughts do you get from this experiment? Leave your comments below, and if you like this video, hit the like button and subscribe to my channel. I will definitely continue to research this topic. See you again next time. Thanks for watching!
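For reference, the conversion, training, and viewing steps described above are typically run through the scripts that ship with Inria's gaussian-splatting source code. The dataset and model paths below are placeholders, and the exact flags can differ between versions, so treat this as a sketch of the typical invocations rather than the exact commands used in the video:

    python convert.py -s <dataset>            (runs COLMAP and builds the point cloud; the rendered frames go in <dataset>/input)
    python train.py -s <dataset>              (trains the Gaussian splatting model; by default it saves at 7,000 and 30,000 iterations)
    SIBR_gaussianViewer_app -m <model_path>   (opens the trained model in the SIBR viewer for real-time navigation)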
Info
Channel: Olli Huttunen
Views: 37,774
Keywords: Nerf, neural radiance fields, Radiance field, Ai, 3D model, 3d Rendering, Camera movement, camera fly, Nerf model, scanning with 360 camera, 3d scanned, created with ai, Ai modelling, gaussian splatting, cgi to 3dgs, cgi to gaussian splatting, pre-rendered images to 3dgs, SIBR viewer, Conversion, real-time rendering, real-time, ArchViz, Mockup, Product visualization, architectural visualization, process, colmap, Blender, path tracing, Cycles rendering
Id: KriGDLvGDZI
Length: 10min 30sec (630 seconds)
Published: Mon Sep 25 2023