How I Solved Real Time Motion Blur

Video Statistics and Information

Captions
In computer graphics, the motion blur effect is added to renders to simulate the effect of a real-world camera shutter: objects in motion are blurred along their direction of travel, with the amount of blur proportional to their speed multiplied by the span of time the shutter is simulated to be open.

In real-time graphics, motion blur usually comes in the form of a post-process effect that takes a set of buffers, each describing a different parameter of the rendered environment at a single instant, and generates the desired effect from them. The buffers usually involved are the color buffer (the image you would see on screen without the effect), the velocity buffer (the screen-space velocity of each pixel), and the depth buffer (the depth of each pixel).

The most basic real-time motion blur effect takes the velocity buffer and, for each pixel, averages samples from the color buffer at regular steps along that pixel's velocity, returning the averaged color as the final output. On its own this yields sufficient results for a basic motion blur effect, and while it requires some attention to prevent artifacts such as ghosting and off-centering of the blur, it can stay rather simple in its implementation and may be sufficient for many applications.

The main limitation of this method, however, is that the blur is bound within the silhouettes of the objects being rendered. Since those silhouettes stay the same size regardless of velocity, at higher velocities and lower refresh rates the effect falls short in realism, and the silhouettes of the objects become more obvious than desired. The reason for this discrepancy is that the larger the velocity relative to each slice of time, the larger the distance the object covers in that time, and thus the larger the distance over which the object needs to be blurred along its velocity to emulate the effect accurately.
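The basic per-pixel approach described above can be sketched as follows. This is an illustrative CPU sketch in Python/NumPy, not the actual shader; function and parameter names are my own, and a real implementation runs per-pixel on the GPU:

```python
import numpy as np

def motion_blur(color, velocity, num_samples=8):
    """Naive per-pixel motion blur (illustrative sketch).

    color:    (H, W, 3) float array, the rendered color buffer
    velocity: (H, W, 2) float array, per-pixel screen-space velocity in pixels

    For each pixel, colors are averaged at regular steps along that pixel's
    own velocity. The blur is therefore confined to each object's silhouette,
    which is exactly the limitation discussed above.
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            vx, vy = velocity[y, x]
            acc = np.zeros(3)
            for i in range(num_samples):
                t = i / max(num_samples - 1, 1)  # 0..1 along the velocity
                sx = int(round(x - vx * t))      # sample backwards along motion
                sy = int(round(y - vy * t))
                sx = min(max(sx, 0), w - 1)      # clamp to image bounds
                sy = min(max(sy, 0), h - 1)
                acc += color[sy, sx]
            out[y, x] = acc / num_samples
    return out
```

Note that a static pixel (zero velocity) samples itself `num_samples` times and is returned unchanged, so the effect only alters moving regions.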
This means that in order to get close to realistic results, objects with any velocity need to be blurred along that velocity over distances that extend beyond their original silhouettes, and as a result beyond the original velocity values stored in the velocity buffer for that object. To solve this, we can apply a process called velocity dilation, which takes the original velocity buffer and extends the silhouettes of those velocities outward.

The methods I am aware of that achieve this are the one used by Unreal Engine, which I have heard uses a Gaussian filter to blur the original velocity image entirely, achieving an extension in all directions that can then be filtered in later stages, and a 2012 paper by Morgan McGuire called "A Reconstruction Filter for Plausible Motion Blur". In his paper, McGuire selects the most dominant velocity within each larger tile and uses these tiles when blurring, instead of the original velocities. The second part of his method requires intricate weight coefficients to achieve a realistic blurring effect, using both the dilated velocity tiles and the original velocities as input.

Both of these methods have limitations, mainly revolving around the maximum radius of velocity dilation they offer. In Unreal Engine's case, while I don't know enough about it, I can imagine that a larger dilation radius would come at the cost of detail. In McGuire's case, the dilation radius is limited to the size of the tiles, which also sacrifices the level of detail that can be achieved, and extending the dilation without enlarging the tiles would sharply increase the cost of his algorithm: by default, McGuire's method has a complexity that scales linearly with the desired dilation radius, so the processing power required grows in direct proportion to it.

Another problem I personally have with the existing implementations is the centering of the blur around each pixel. This means that instead of an object being blurred
backwards based on its velocity, it is blended equally forwards and backwards. There are a few reasons behind this decision, the obvious one being that the required dilation radius for the same velocity magnitude can be cut in half. Another reason, I believe, is that blending of the image and edge-case handling can be done more reliably when dealing with dilated velocities rather than a bare silhouette edge.

My problem with this design choice is the resulting effect: extending the blur beyond the object's position can be quite obvious at rapid velocity changes and ruin the immersion. In addition, it goes against the intuition of real-life motion blur, as motion blur captured by real cameras is retrospective in nature. There are also many reports of motion blur causing nausea in users who view it for extended periods, and I believe the non-intuitive nature of centered blurring can contribute to that, similarly to how seasickness can be caused by subtle discrepancies between viewed and felt motion over long periods of time. Lastly, I propose the theory that the disdain many viewers have for motion blur effects, mainly in real-time, game-like applications, is partially caused by centered blur, for the reasons I have previously stated.

A bit of background before I get into it: I am an independent developer with a passion for shaders and graphics programming in general. I have no academic background, and my prior experience with generating motion blur effects was limited. When I saw a message in the Godot game engine Discord asking for help making a proprietary motion blur effect for their future release, my interest was piqued and I decided to give it my shot.

A bit on my previous iterations. My first attempt at motion blur with velocity dilation used the velocity data from the previous frame to achieve the dilation. This was better than no dilation, but was obviously limited by the fact that objects usually travel distances larger than their size, even at
higher refresh rates. Another attempt was to explicitly write each velocity value to the pixels that would be affected by it. I would like to clarify that I know how GPUs and parallel processing prevent this from being possible, but I gave it a shot anyway, because if it had worked, it would have been the approach shown in this video. Obviously, you cannot scatter writes to buffers in a non-contiguous fashion when working on the GPU, due to the parallel structure that is the core point of the GPU, so this method did not get far.

Then I reached the final idea, the method I have worked on and refined since: utilizing the jump flood algorithm to dilate velocities in a velocity buffer. In short, the jump flood algorithm is the go-to when it comes to creating signed distance fields and Voronoi diagrams, because it turns an O(n) problem into an O(log n) problem, as long as that problem prioritizes some data points over others in large data sets based on continuous conditions. What I mean by this is that for problems where you want each pixel of an image to be aware of the silhouette pixel it is closest to, you can use the full power of GPU parallel processing to perform only log₂(n) passes, with n being the longer side of the image. This is perfect for generating Voronoi or signed distance diagrams, because they are usually desired in high quality and are based on continuous distance values. This algorithm, partnered with the high parallelism of a GPU, is so efficient that it can be run on full-resolution images at run time to generate things like high-quality dynamic soft shadows; chances are you have seen one of those in a game before.

I need to clarify that I cannot claim to have invented this method, but I also do not reject the possibility that I have, on account of never finding any example of it online. I do not have access to academic research papers, nor any exclusive access to game studios and their methods, so it may
already exist; all I can safely say is that I reached it independently through my own research and experimentation.

This is what my method looks like in action. At the top left you can see the raw velocity data fed into the jump flood passes; at the bottom left, the offsets of the velocity buffer that indicate the resulting dilation; at the top right, the dilated velocity buffer; and at the bottom right, the motion-blurred color output using said buffer.

Here is the motivation behind this method and why I believe it is superior to its predecessors. As stated before, the jump flood algorithm is an O(log n) solution, and instead of n being the size of the image, we only care about the maximum radius of the desired velocity dilation. This means that to increase the reach of the dilation, we only need to increase the number of passes logarithmically. Instead of requiring four times the processing to get from a range of 64 pixels to 128, I only run the jump flood pass one more time, and then only a couple more to reach a radius of 512 pixels, which is easily nearly a quarter of most modern computer screens' width.

In addition, because of the continuous nature of the algorithm, I am able to generate results at full resolution, meaning each velocity gets dilated precisely around the silhouettes of even the most detailed geometry, so I lose no detail in the final dilated velocity buffer.

Lastly, in exchange for a negligible reduction in resolution, I can multiply the starting step size of the algorithm. Instead of starting at a step size of one pixel and reaching values 128 pixels wide after seven passes, which would strain any modern GPU, I can start with a step size of eight pixels, which only slightly reduces the dilation quality but cuts the number of passes to four in total. Note that 128 pixels at high-definition resolution is more than usually desired for this effect, and the faster your frame rate, the
smaller that radius needs to be; at a 144 Hz refresh rate on a high-definition screen, you would not need more than three passes in total. In addition, these passes are identical, meaning they can be run in quick succession with no rebinding of buffers; the only thing that needs changing between them are the push constants telling the shader which pass index it is on.

I am not a person of academic background; in fact, I dropped out of high school. But I did my best, using McGuire's paper as a reference, to put together a paper on the subject, which I have linked as a PDF download below. I have also released a demo project showcasing the effect on GitHub, which is also linked below. In addition, I have linked all the sources I used in my research and in this video in the video description. I hope you found this video interesting, and I'll see you in the next one.
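To make the idea concrete, here is a minimal CPU sketch of jump-flood velocity dilation. This is illustrative Python/NumPy under my own naming (`jump_flood_dilate` is not from the demo project); the real effect runs as full-resolution GPU shader passes, with the pass index supplied via push constants as described above:

```python
import numpy as np

def jump_flood_dilate(velocity, max_radius=16):
    """Jump-flood velocity dilation (illustrative CPU sketch).

    velocity: (H, W, 2) array; pixels with zero velocity are treated as empty.
    Each pixel ends up holding the velocity of (approximately) the nearest
    moving pixel, extending velocities beyond object silhouettes. The step
    size halves each pass, so the pass count grows only logarithmically
    with the desired dilation radius.
    """
    h, w, _ = velocity.shape
    # seed[y, x] = coordinates of the nearest known moving pixel, or (-1, -1)
    moving = np.any(velocity != 0, axis=-1)
    seed = np.full((h, w, 2), -1, dtype=int)
    ys, xs = np.nonzero(moving)
    seed[ys, xs, 0] = ys
    seed[ys, xs, 1] = xs

    step = max_radius // 2  # classic JFA schedule: N/2, N/4, ..., 1
    while step >= 1:
        new_seed = seed.copy()
        for y in range(h):
            for x in range(w):
                best = new_seed[y, x]
                # Examine the 8 neighbors (and self) at the current step size
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < h and 0 <= nx < w):
                            continue
                        cand = seed[ny, nx]
                        if cand[0] < 0:
                            continue  # neighbor knows no seed yet
                        d_cand = (cand[0] - y) ** 2 + (cand[1] - x) ** 2
                        d_best = (best[0] - y) ** 2 + (best[1] - x) ** 2
                        if best[0] < 0 or d_cand < d_best:
                            best = cand
                new_seed[y, x] = best
        seed = new_seed
        step //= 2

    # Fetch each pixel's velocity from its nearest seed
    dilated = np.zeros_like(velocity)
    valid = seed[..., 0] >= 0
    dilated[valid] = velocity[seed[valid][:, 0], seed[valid][:, 1]]
    return dilated
```

Starting the schedule at a larger initial step (e.g. 8 instead of descending all the way to 1) is what trades a small amount of dilation quality for fewer passes, as discussed above. A production version would also keep the distance-to-seed offsets for masking and blending rather than dilating unconditionally.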
Info
Channel: Sphynx_Owner
Views: 22,341
Id: m_KvYlYF3sA
Length: 10min 19sec (619 seconds)
Published: Sun Jul 07 2024