VR Optimization and Performance Tips for Unity

Captions
Hello, and welcome! I hope you're doing well. Today we're going to be talking about performance improvements you can make for your virtual reality project. We'll be covering optimizations for models and textures, as well as looking at some additional project settings to improve the visuals and performance of your project. But before we get into the specifics, I'd like to thank Unity and Arm for their support in sponsoring this video.

When it comes to optimizations, we typically need to define what our goals are. These goals are technically called performance targets, and they can vary depending upon your project and target hardware. As an example, let's say we're going to be using Oculus Quest as our target hardware. We can go to Oculus's website to find their recommended target ranges: we need to hit 72 frames per second, 150 to 175 draw calls, and 750,000 to 1 million triangles per frame. These are some good general guidelines that you can keep track of to ensure you're within the recommended range while developing.

The first thing we're going to take a look at is optimizing geometry. One of the most common optimizations for geometry is using level-of-detail models, or LODs for short. At runtime, we switch between a predetermined set of models based on how far the viewer is from the object. Each of these models differs in the amount of geometric detail it has. We then order these models so that the detail of the geometry decreases as the distance to the camera increases. From a more practical point of view, when an object is smaller on screen, it's rarely the object of interest. We want to be putting our resources into what the player is most likely focusing on, and since the object is smaller on screen, it's also going to take up fewer pixels. We don't want to spend additional runtime resources on an object the player may be unable to see. When creating the LODs, they should be similar in silhouette and appearance, but also be different enough in their use of geometry to increase performance. Also, don't worry about needing to create these additional LOD models by hand; there are plenty of software solutions for automating the creation process. We can see here that as we move farther from and closer to this rock, we can see the changes in its visible wireframe.

To implement LODs on an object, we'll first need to add an LOD Group component. Once we add the component, you'll need to child the appropriate models to switch between to that game object. We then select a section of the LOD graph on the component and drag a renderer into it. Once that's done, we can move our camera to see the model change in real time. These models, due to their simplified geometry, can also prevent the oversampling of a model that leads to geometric aliasing. Which, I guess, means this is a good time to move on to aliasing.

Now, most of us are familiar with the concept of aliasing: it's what creates those incredibly familiar jagged edges, usually seen on the edges of objects. Geometric aliasing can happen in a couple of different ways, most commonly when two high-contrast pixels are near each other on screen and there aren't enough samples to go from one color to the other. This setup can create those hard transitions that form the jagged edge. Geometric aliasing can also occur if the geometry we're rendering is too dense for the space it occupies on screen. Naturally, since the object is smaller on screen, it uses fewer pixels while still trying to generate all of those edges and the transitions between them. If we utilize LODs, we can reduce the number of necessary edges and transitions, thus decreasing the presence of geometric aliasing.

As for fixing the more common form of geometric aliasing, this can vary depending upon what render pipeline you're using, but we can do it using multisample anti-aliasing, or MSAA for short. For the Universal Render Pipeline, we'll need to select our render pipeline asset, go to Quality > Anti Aliasing (MSAA), and select the 4x option. This is the sweet spot for cost and performance, so there isn't a significant reason to increase it further. One extra thing to take note of: if the hardware uses an Arm Mali GPU, MSAA is nearly free at runtime.

Now that we've talked about geometric aliasing, let's take a look at its lesser-talked-about sibling, specular aliasing. You may have noticed it occurring on the walls behind the pillars from earlier. Specular aliasing can occur when we have high specularity levels, or highlights, on a thin bevel or edge. We can see that here, where the highlights appear and disappear between frames, giving the edge an appearance of shimmering. In VR this is incredibly distracting, since it's nearly impossible for the user to ignore. When we look at the object's wireframe, we can see that the corners are beveled; these corners create the hard edges and thin pieces of geometry that can cause specular aliasing. Realistically, there isn't a simple fix for this. We ultimately have two different techniques we can use to mitigate it the best we can. We can model the objects to avoid sharp or sudden edges where possible and utilize smooth, round shapes. We can also use matte materials that don't have a high specularity for our metallic objects. However, for these techniques to work effectively, they need to be communicated early on with your artists; it'll save both time and resources compared with altering the game's assets or art style in the future. As a quick example, we can reduce the specularity on a per-material basis by adjusting the smoothness, or the influence of the metallic or specular map.

All right, I think that about does it for the geometry-based stuff; let's move on to textures. Firstly, we want to make sure that the textures we're using in our project have the proper import settings. For texture compression, go with ASTC, since it provides the best quality-to-size ratio. There are a few different ways of handling this. You can technically do this on a project basis within your build settings if you're in a rush; however, it's much better to handle texture compression on a per-asset basis. This avoids having unnecessary texture data, and it's generally good to leave textures used for interfaces uncompressed. Let's take a look at how we can set up our textures individually. With a texture asset selected, go to the Android tab, enable Override for Android, and choose an ASTC compressed format. If you'd like some additional detail on this, Arm has a nifty table on their website covering file size and compression in more detail.

Now that we've given Unity a texture with the proper compression, let's talk about how Unity uses that texture for mipmapping. Whenever you import a texture into Unity, it goes through a process of being downscaled and filtered. This process creates a series of smaller textures based on the initial imported texture. These textures are then used on distant surfaces to minimize texture aliasing and improve our performance, since we're not using additional texture data. Mipmapping is typically enabled by default, but we can double-check this by selecting a texture in our assets window, going to its advanced settings, and making sure the Generate Mip Maps checkbox is ticked.

While we're here, let's set up our texture filtering as well, since texture filtering relies heavily on mipmapping; we are essentially blending between different levels of our mipmaps. We have a few options to choose from for how our mipmaps are going to be blended: Point, Bilinear, and Trilinear, with optional anisotropic filtering. That sounds like a lot, but it's pretty simple. Point basically means there's no filtering; Bilinear will sample the nearest mipmap, while Trilinear will sample between the two nearest mipmaps and give us the smoothest transition. We can change our texture filtering by selecting a texture, going to its advanced settings like we did before, and setting its Filter Mode. For VR, I'd recommend going with Trilinear, since it avoids the noticeable pop between mipmaps you'd get with the Bilinear option. The additional anisotropic filtering option helps with viewing slanted surfaces. It can be a somewhat expensive addition, so I'd reserve it for environmental assets and set it to a relatively low value.

All right, I don't have a good transition for this, but we're still going to be talking about textures. Let's talk about bump mapping, where for rougher or more complex surfaces we can utilize different mapping methods. The most common form of bump mapping that you may be familiar with is normal mapping. This technique works incredibly well for communicating surface detail in most use cases. However, in VR, normal maps aren't as useful for communicating these details. This is due to the constant change of the player's viewing angle in VR. We also have the addition of needing to render two images, one for each eye; this rendering process is known as stereoscopic rendering and is specific to VR. The illusion of a normal map is not as reliable in VR, since we're using one texture to render the two different viewpoints. It's still much better than a flat material, but other options are available to us.

Our other primary option is parallax occlusion mapping. Now, that's a big fancy term, but it's quite simple: in Unity, we can implement it using a height map. The significant distinction between a normal map and parallax occlusion mapping is that this mapping factors in the viewer's angle with the surface. We can get better results using this additional functionality, but it is a bit of an expensive process and should be used sparingly, for important objects that the player will view up close.

All right, with all the explanation out of the way, let's look at implementing these two options. For normal maps, we'll need to select an imported texture in our assets window and look at its import settings in the inspector. For the texture type, we want to make sure we've set the import setting to Normal Map, and then drag and drop the texture into a material. For the height map, we can set this up in a similar way to the normal map; at the time of recording, it's available within the Built-in, URP, and HDRP render pipelines.

Now that we've talked about geometry and textures, let's look into something that brings all of this together: shadows. Shadows are incredibly important to the depth of your scene; however, they can be very computationally intensive, especially on mobile devices. Lighting is also a very complex topic, so we're specifically focusing on lighting static objects. It's a relatively simple process: we just need to mark any non-moving objects in our scene as static, then ensure that we've set all the lights in our scene to Baked. Let's then go to our Lighting panel and set our lighting mode to Subtractive, since it's the most performant option. Also, when it comes to lightmaps, it's generally better to work with fewer, larger textures than several small ones, so let's set our Lightmap Size to 4K as well. Finally, let's press Generate Lighting in the Lighting panel. This process will create a series of lightmap textures that are shown at runtime.

Arm also offers an interesting resource on creating dynamic soft shadows using local cubemaps calculated in Unity beforehand. We essentially create a cubemap used as a mask to describe how light can enter a space. This effect is incredibly low-cost at runtime, so if you're looking for a simple real-time lighting solution, I'd recommend looking into it. You can see the supporting documentation and Asset Store link in the description.

When we're working with lighting, we need to be aware of artifacts known as banding. Banding occurs when we can't correctly show the required color within the number of bits we have for a pixel. The lack of information for each pixel can turn what was initially going to be a smooth gradient into a series of color steps known as bands. These bands can be tiring to the player's eyes, since the eyes need to adjust for each step. This artifact may be difficult to see in the video, so let's make some adjustments in Photoshop. You may be able to see each of the steps here; if we increase the contrast of the image, you can see the banding much more easily.

We can mitigate these visible steps by enabling dithering or tone mapping. Dithering adds noise to the scene to break up the visible levels that we're seeing; if you're using the Universal Render Pipeline, you can navigate to your camera's Rendering section and enable the Dithering setting. Tone mapping, meanwhile, attempts to remap values from a high dynamic range to a more usable low dynamic range; this works well for screens that do not support HDR. If you want to tackle this using tone mapping, you can create a new Volume Profile and add the Tonemapping override; I'd recommend going with the Neutral setting to start. Finally, you can also technically enable HDR on your camera; however, this can significantly add to your render time on mobile hardware, and I would avoid it.

One of the last things we're going to take a look at is alpha compositing, a technique of combining an image with its background to produce a single image. The technique produces an appearance of transparency and is most commonly used for foliage. It's handy to us for reducing aliasing, since the previous anti-aliasing methods we've covered don't work in this scenario. We're specifically looking at a method known as alpha to coverage. What this technique ultimately does is use the alpha component of a fragment shader together with the multisampling mask. The multisampling mask gives us additional samples to determine the overall color of a pixel; if you recall, since we're using 4x MSAA, we have four samples that we can use for each pixel. We then simply check each sample to see if it has a color value. If all four of the samples contain a color, then the pixel is fully opaque; if only half of the samples have a color value, then the pixel is somewhat transparent and has an alpha value of 0.5. These additional transparency levels give us a much cleaner result than what we would typically have. To do this, we first enable AlphaToMask on an opaque shader. At this point we technically have what we're looking for, but this isn't a complete implementation; to get a better result, we'll also sharpen the edges within the fragment function.

Finally, we want to be using the linear color space, since it works nicely with physically based rendering, or PBR for short. Whether you're working in the Built-in or Universal Render Pipeline, Unity utilizes PBR for its lit materials. I say this because it's important to note that linear color space works better with PBR than the alternative, gamma. We also get the additional benefit of a reduction in specular aliasing, since the gamma color space can quickly become overly lit and blown out. We can change the color space of our project by going to Project Settings > Player > Color Space and setting it to Linear.

And I think that's about it! Optimization and best practices are indeed a complex topic, but I hope you learned something that you can use for your project. I want to thank Unity and Arm once again for their support. If you'd like some additional info on what I've covered, I've linked Arm's documentation down below. That's it for me; I'll see you all in the next one.
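To make the 72 fps target mentioned in the transcript concrete, it helps to translate it into a per-frame time budget; everything a frame does (draw calls, shading, physics) has to fit inside that window. A quick sketch:

```python
def frame_budget_ms(fps):
    # Milliseconds available per frame at a given refresh rate.
    return 1000.0 / fps

for fps in (72, 90, 120):
    print(f"{fps} Hz -> {frame_budget_ms(fps):.2f} ms per frame")
```

At 72 Hz you get just under 14 ms; missing it means a dropped frame, which is far more noticeable in VR than on a flat screen.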
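The distance-based switching that an LOD Group performs, as described in the transcript, can be sketched outside Unity. This is a toy model, not Unity's actual formula: the screen-height heuristic and the threshold values here are assumptions for illustration only.

```python
# Simplified sketch of LOD selection: pick a model based on the fraction
# of screen height the object covers (a rough stand-in for an LOD Group's
# screen-relative transition heights).

def screen_height_fraction(object_height, distance, fov_height):
    # Approximate fraction of the vertical view the object covers.
    return object_height / (distance * fov_height)

def select_lod(fraction, thresholds):
    # thresholds: descending screen-height cutoffs, e.g. [0.6, 0.3, 0.1].
    # Returns the LOD index to render, or None when the object is culled.
    for lod_index, cutoff in enumerate(thresholds):
        if fraction >= cutoff:
            return lod_index
    return None  # below the last threshold: culled

thresholds = [0.6, 0.3, 0.1]  # hypothetical LOD0/LOD1/LOD2 cutoffs
print(select_lod(screen_height_fraction(2.0, 3.0, 1.0), thresholds))    # 0 (close up)
print(select_lod(screen_height_fraction(2.0, 6.0, 1.0), thresholds))    # 1 (mid-range)
print(select_lod(screen_height_fraction(2.0, 100.0, 1.0), thresholds))  # None (culled)
```

The key property matches the transcript: as distance grows, the screen fraction shrinks and progressively simpler models are chosen.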
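The mipmap chain described in the transcript can be reasoned about with a little arithmetic: each level halves the previous one down to 1x1, and the full chain adds roughly one third to the base texture's memory. A small sketch, assuming a square power-of-two texture stored as uncompressed RGBA8:

```python
import math

def mip_chain(size):
    # Each mip level halves width/height down to 1x1, e.g. 8 -> 4 -> 2 -> 1.
    levels = int(math.log2(size)) + 1
    return [size >> i for i in range(levels)]

def chain_bytes(size, bytes_per_pixel=4):
    # Total memory for the base level plus all mips (RGBA8 assumed).
    return sum(s * s * bytes_per_pixel for s in mip_chain(size))

base = 1024 * 1024 * 4
total = chain_bytes(1024)
print(mip_chain(8))                         # [8, 4, 2, 1]
print(f"overhead: {total / base - 1:.1%}")  # about a third on top of the base level
```

That one-third overhead is the trade the transcript describes: a bit more texture memory in exchange for less texture aliasing and cheaper sampling on distant surfaces.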
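ASTC's quality-to-size ratio, mentioned in the texture compression section, comes from its fixed 128-bit blocks: larger block footprints spread those 128 bits over more texels. A small calculator; the 1024x1024 RGBA8 baseline is just an example:

```python
import math

def astc_bytes(width, height, block_w, block_h):
    # Every ASTC block encodes block_w x block_h texels in 16 bytes
    # (128 bits), so bits-per-pixel depends only on the block footprint.
    blocks_x = math.ceil(width / block_w)
    blocks_y = math.ceil(height / block_h)
    return blocks_x * blocks_y * 16

raw = 1024 * 1024 * 4  # uncompressed RGBA8 baseline
for bw, bh in [(4, 4), (6, 6), (8, 8)]:
    size = astc_bytes(1024, 1024, bw, bh)
    bpp = 128 / (bw * bh)
    print(f"ASTC {bw}x{bh}: {size // 1024} KiB ({bpp:.2f} bpp, {raw / size:.0f}:1 vs RGBA8)")
```

This is why the per-asset override matters: a UI texture may need the quality of a small block (or no compression at all), while environment textures can often take 6x6 or 8x8.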
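The difference between bilinear and trilinear mip filtering from the transcript can be sketched with a toy one-dimensional model. Real GPUs also filter within each level; that part is omitted here, and the per-level values are made up:

```python
import math

def trilinear_weight(footprint):
    # footprint: texels covered per screen pixel. The ideal mip level is
    # log2(footprint); trilinear blends the two nearest integer levels.
    level = max(0.0, math.log2(footprint))
    lo = int(level)
    t = level - lo  # blend factor between mip lo and mip lo+1
    return lo, lo + 1, t

def sample_trilinear(mips, footprint):
    # mips: one average value per mip level of a toy 1-D "texture".
    lo, hi, t = trilinear_weight(footprint)
    lo = min(lo, len(mips) - 1)
    hi = min(hi, len(mips) - 1)
    return (1 - t) * mips[lo] + t * mips[hi]

mips = [0.0, 0.25, 0.5, 1.0]        # made-up per-level averages
print(sample_trilinear(mips, 1.0))  # exactly mip 0
print(sample_trilinear(mips, 3.0))  # smoothly between mip 1 and mip 2
```

Bilinear would snap to a single level, so the output jumps as the footprint crosses a level boundary; that jump is the "pop" between mipmaps the transcript recommends avoiding in VR.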
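The "factors in the viewer's angle" idea behind height-map-based mapping can be illustrated with the basic parallax offset formula. Note this is the cheap single-step parallax variant, not full parallax occlusion mapping (which ray-marches the height field); the scale value is a made-up example:

```python
def parallax_offset(uv, view_dir, height, scale=0.05):
    # Basic parallax mapping: shift the texture lookup along the
    # tangent-space view direction, proportionally to the sampled height.
    vx, vy, vz = view_dir  # tangent-space view vector, vz > 0
    u, v = uv
    return (u + vx / vz * height * scale,
            v + vy / vz * height * scale)

# Looking straight on (view along +z): no shift at all.
print(parallax_offset((0.5, 0.5), (0.0, 0.0, 1.0), height=1.0))
# A grazing angle shifts the lookup much more.
print(parallax_offset((0.5, 0.5), (0.8, 0.0, 0.2), height=1.0))
```

The division by the view vector's z component is what makes the effect view-dependent, which is exactly what a normal map lacks and why this holds up better under the constantly changing viewpoints of stereo VR.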
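Why dithering hides the banding discussed in the transcript can be shown numerically: quantizing a smooth gradient to a few levels produces flat bands, while adding noise of about one quantization step first makes the local average track the original gradient. A toy 4-level example:

```python
import random

def quantize(x, levels):
    # Snap a 0..1 value to one of `levels` discrete steps.
    return round(x * (levels - 1)) / (levels - 1)

def dithered_quantize(x, levels, rng):
    # Jitter by roughly one step before snapping; individual pixels
    # flicker between neighboring levels, but their average stays true.
    noise = (rng.random() - 0.5) / (levels - 1)
    return quantize(min(1.0, max(0.0, x + noise)), levels)

rng = random.Random(0)
gradient = [i / 999 for i in range(1000)]
banded = {quantize(x, 4) for x in gradient}
print(sorted(banded))  # only 4 distinct output values: visible bands

# With dithering, a mid-gradient value averages out near its true level.
samples = [dithered_quantize(0.5, 4, rng) for _ in range(10000)]
print(round(sum(samples) / len(samples), 2))
```

An 8-bit channel has 256 levels rather than 4, but the mechanism is the same; the noise trades smooth-looking steps for imperceptible per-pixel jitter.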
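The sample-counting logic of alpha to coverage described in the transcript can be sketched as follows. The mask layout is simplified: real hardware distributes covered samples spatially across the pixel rather than just setting the low bits.

```python
def coverage_mask(alpha, samples=4):
    # Alpha-to-coverage: convert a fragment's alpha into the number of
    # MSAA samples it covers. With 4x MSAA this yields 5 transparency
    # levels: 0, 0.25, 0.5, 0.75, 1.0.
    covered = round(alpha * samples)
    return (1 << covered) - 1  # low bits set, e.g. 0b0011 for half

def resolved_alpha(mask, samples=4):
    # The MSAA resolve averages samples, so the effective alpha is the
    # fraction of covered samples.
    return bin(mask).count("1") / samples

print(bin(coverage_mask(0.5)))             # 0b11: two of four samples covered
print(resolved_alpha(coverage_mask(0.5)))  # 0.5
print(resolved_alpha(coverage_mask(1.0)))  # 1.0
```

This also shows why the technique depends on MSAA being enabled: with no extra samples there is nothing to partially cover, which is why the transcript pairs it with the 4x MSAA setting.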
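The advantage of the linear color space recommended at the end of the transcript can be seen with the standard sRGB transfer functions (IEC 61966-2-1): averaging two colors in gamma space lands on a darker result than doing the math in linear space and encoding afterwards.

```python
def srgb_to_linear(c):
    # Standard sRGB decode.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Standard sRGB encode.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Blending 50/50 between black and white:
wrong = (0.0 + 1.0) / 2  # averaged directly in gamma space
right = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(round(wrong, 3), round(right, 3))  # gamma-space math comes out too dark
```

Lighting is physical math on light intensities, so PBR needs the linear values; doing that math on gamma-encoded values is what produces the overly lit, blown-out look the transcript mentions.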
Info
Channel: Andrew
Views: 46,819
Keywords: getting started with VR, VR development, VR in unity, VR unity, unity, vr for unity, VR Tips for Unity, VR Tips, VR Performance, VR Optimization, LOD, Mipmapping, Aliasing, Texture Compression, Texture Filtering, Bump Mapping, Banding, Alpha Compositing, Color Space
Id: xqgt9W4Zrjg
Length: 14min 21sec (861 seconds)
Published: Wed Mar 24 2021