3D Gaussian Splatting for Real-Time Radiance Field Rendering

Video Statistics and Information

Captions
Hi, this is the short video presentation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering". We propose to use 3D Gaussians as a new representation for radiance fields. We show that 3D Gaussians preserve desirable properties of continuous volumetric radiance fields while avoiding unnecessary computation in empty space.

We start with a set of cameras and a point cloud provided by structure from motion during camera calibration. Next, we optimize a set of 3D Gaussians to represent the scene. Finally, we render the transparent anisotropic Gaussians and backpropagate the gradients to their properties. After the optimization, the 3D Gaussians often take on extreme anisotropic properties to represent very high-frequency geometry like vegetation. Gaussians are a compact and fast representation: in this example we scale down their extent so we can see that the spokes of the bicycle can be represented with a handful of Gaussians. Here we visualize the progress of the 3D Gaussian point cloud during optimization in a time lapse.

In summary, 3D Gaussians are the first to achieve state-of-the-art quality, real-time rendering, and fast training, all at the same time. Here we show some interactive sessions recorded in our lab with an A6000 GPU; please note that real-time rendering can also be achieved with less powerful hardware.

We ran an extensive evaluation on multiple datasets: Mip-NeRF 360, Tanks and Temples, Deep Blending, and NeRF Synthetic. We also compared our algorithm against recent methods like Mip-NeRF 360, Instant-NGP, and Plenoxels. In our quantitative evaluation, 3D Gaussians achieve overall equal and sometimes better quality than the best models that are slow to train and render, while maintaining fast training and an order of magnitude faster rendering. Here we compare side by side with several algorithms: in many cases we are better than Mip-NeRF 360 while rendering at more than 100 frames per second, and we achieve higher visual quality than Instant-NGP with similar training times and fewer failure cases.

We also did a careful ablation study to evaluate the different design choices of our algorithm. Here we show that even if we stop the training at seven thousand iterations, which takes approximately six minutes, we retain great visual quality. We also show what happens if, instead of using the SfM point cloud for initialization, we initialize with a random set of points sampled uniformly in the scene. Another important element of our method is the anisotropy of the 3D Gaussians, which has a big impact on the final quality.

Thank you for listening. Please visit our website for the paper, the code release, and all the supplemental material.
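The pipeline described in the captions (initialize anisotropic 3D Gaussians from the SfM point cloud, render them, and backpropagate gradients to every Gaussian property) can be sketched roughly as below. This is a minimal illustrative sketch in PyTorch, not the authors' released implementation: the `render` function is a hypothetical placeholder for the differentiable rasterizer, and names such as `Gaussians`, `log_scales`, and `opacity_logits` are assumptions made for this example. Building the covariance as R S Sᵀ Rᵀ from a per-axis scale and a rotation quaternion keeps it a valid covariance under gradient descent while still allowing the extreme anisotropy mentioned in the video; the plain L1 loss is a simplification of the paper's training objective.

```python
# Minimal sketch of anisotropic 3D Gaussians and their optimization loop.
# All names are illustrative; "render" stands in for the differentiable
# tile-based rasterizer, which is not reproduced here.
import torch


def quaternion_to_rotation(q: torch.Tensor) -> torch.Tensor:
    """Convert unit quaternions (N, 4) to rotation matrices (N, 3, 3)."""
    q = q / q.norm(dim=-1, keepdim=True)
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
        2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
    ], dim=-1).reshape(-1, 3, 3)


class Gaussians(torch.nn.Module):
    """Anisotropic 3D Gaussians: position, scale, rotation, opacity, colour."""

    def __init__(self, sfm_points: torch.Tensor):
        super().__init__()
        n = sfm_points.shape[0]
        # Positions are initialized from the SfM point cloud, as in the video.
        self.means = torch.nn.Parameter(sfm_points.clone())
        self.log_scales = torch.nn.Parameter(torch.zeros(n, 3))   # per-axis extent
        self.quats = torch.nn.Parameter(
            torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(n, 1))    # orientation
        self.opacity_logits = torch.nn.Parameter(torch.zeros(n))  # fed to the rasterizer
        self.colors = torch.nn.Parameter(torch.rand(n, 3))

    def covariances(self) -> torch.Tensor:
        # Sigma = R S S^T R^T stays positive semi-definite during optimization
        # while still allowing very elongated (anisotropic) Gaussians.
        R = quaternion_to_rotation(self.quats)
        S = torch.diag_embed(torch.exp(self.log_scales))
        M = R @ S
        return M @ M.transpose(1, 2)


def render(gaussians: Gaussians, camera) -> torch.Tensor:
    """Hypothetical stand-in for the differentiable splatting renderer."""
    raise NotImplementedError


def train(gaussians: Gaussians, cameras, images, iterations: int = 7000):
    optimizer = torch.optim.Adam(gaussians.parameters(), lr=1e-3)
    for step in range(iterations):
        cam, gt = cameras[step % len(cameras)], images[step % len(images)]
        pred = render(gaussians, cam)
        # The paper combines L1 with a D-SSIM term; plain L1 keeps the sketch short.
        loss = (pred - gt).abs().mean()
        optimizer.zero_grad()
        loss.backward()   # gradients flow back to every Gaussian property
        optimizer.step()
```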
Info
Channel: Inria/GraphDeco (GraphDeco Inria Research Group)
Views: 125,792
Keywords: SIGGRAPH, computer graphics, graphics, NeRF, novel view synthesis, IBR, Point Rendering, Point Based Rendering, Interactive, Fast, Transactions on Graphics, Inria
Id: T_kXY43VZnk
Length: 5min 4sec (304 seconds)
Published: Tue May 02 2023