Redshift presentation at the SIGGRAPH 2017 NVIDIA booth

Video Statistics and Information

Captions
Hello everyone, I'm Rob Slater, co-founder of Redshift Rendering Technologies, and we make the world's first biased, production-ready GPU renderer. We haven't been around for very long, but 2016 was another very good year for us: we saw growing popularity among freelancers and wider adoption among high-profile film and TV production studios.

Last year Digital Domain became a customer. They have a great reputation for producing fantastic work, and we're really excited to see what they do with Redshift. Blizzard were actually the first customer of ours to push Redshift really hard, with really heavy scene assets; the end result was the entire episodic cinematic content for their popular game Overwatch. We think these cinematics are truly gorgeous, and in fact they got quite a lot of attention at our booth at SIGGRAPH last year; you can check them out on YouTube if you're interested. Polygon Pictures became our largest customer in Japan. They have a reputation for fantastic cartoons and animated series, and they chose Redshift to render Amazon's new Lost in Oz series; they were very happy with the results, and in fact it won three Emmys very recently. We also brought on board CoSA VFX late last year. They do TV series like Westworld, Marvel's Agents of S.H.I.E.L.D., and Gotham; I'm actually very thrilled to be working with these guys because I'm a fan of their shows, and we're all very excited about our future with CoSA VFX. Also in the pipeline, we're working with some more high-profile studios on TV series and movies that we can't talk about today, but maybe we'll be able to talk about them in our talk next year.

OK, so one of the things people have been asking for, as awareness of Redshift has grown, is more integrations into DCCs other than the Autodesk big three of Maya, 3ds Max, and Softimage. So, with that, we
brought out a Houdini plugin earlier this year, in January, and it proved very popular; it was a very smooth plugin process. Hot on the heels of that is a Cinema 4D plugin. This came out very recently, and it's still technically an alpha even though it's being used in some productions; we actually wanted to keep it in alpha for as long as possible because we wanted it to be a really strong integration. Late last year we also started our Katana plugin. This is in a closed alpha, it's going very smoothly, it's being tested by some large studios right now, and hopefully within the next couple of months we'll be able to release it to the public.

For DCCs in general, we've had requests for macOS support, and naturally, because Cinema 4D is very popular among Mac artists, we prioritized this so that we could get Redshift for macOS out. It came out in spring as an alpha, and about a week ago we released it to the public for you to try.

2016 also saw more third-party support for Redshift. Peregrine Labs brought out Yeti support for Redshift; this is something people have been asking about for quite some time, so we're very pleased it's finally out there. We also got a request to support the Golaem crowd-generation plugin, which is popular for epic TV shows and movies. Integrating with Golaem involved adding new technology to Redshift, and we wanted to prioritize it because we felt it would bring Redshift closer to feature parity with CPU renderers; we'll talk a little about that technology later. And of course, growing popularity brings more questions and requests for tutorials, so we hired some artists who were familiar with Redshift to produce quality tutorials, and we released these on YouTube a few months ago.

One of the things that we try to do with Redshift
throughout the year, as well as adding new features and improving things, is to make sure that performance is up to speed, because that is our selling point: Redshift is fast. Last year we brought out Redshift 2.0, and with it a new Redshift material featuring multiple BRDFs, like Beckmann and GGX, and physically correct Fresnel for rough reflections. Adding these new features took a bit of a toll on shader performance, so this was something we wanted to look at and see if we could fix.

One of the challenges when programming for a GPU is that GPUs have limited resources, which means you can write very complicated code that will still run, but will run significantly slower than it should. So we went into the code, rearranged it, found some bits that were not very smart, and got back about 10% of the GPU performance.

Some of you might remember our first talk at GTC back in 2015, where we proudly announced that we were using function pointers to execute the shader nodes of the material shader graph. That was a very convenient, very flexible solution for us, but unfortunately it came with an overhead that was not very good. So we decided to scrap function pointers and switch to giant switch statements instead. If any of you out there are programmers, you'll know that a giant switch statement is a pretty ugly-looking thing, but thankfully it made rendering faster, so we're sticking with it. We also had some long-term goals for optimizing how the shaders communicate with the ray tracer, and this actually gave us the biggest win overall: about a 40% performance improvement. With all the things I've just mentioned, we got almost two times faster rendering over Redshift 2.0.
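The dispatch change described above can be sketched in miniature. Redshift's real shader graph runs as CUDA code on the GPU, where an indirect call through a function pointer blocks inlining and carries per-node overhead; this Python sketch, with made-up node types, only illustrates the two dispatch styles, not the GPU performance characteristics.

```python
from enum import Enum, auto

class NodeType(Enum):
    CONSTANT = auto()
    MULTIPLY = auto()
    ADD = auto()

# Function-pointer style: convenient and flexible, but on a GPU the
# indirect call prevents the compiler from inlining the node bodies.
HANDLERS = {
    NodeType.CONSTANT: lambda node, inputs: node["value"],
    NodeType.MULTIPLY: lambda node, inputs: inputs[0] * inputs[1],
    NodeType.ADD:      lambda node, inputs: inputs[0] + inputs[1],
}

def eval_with_pointers(node, inputs):
    return HANDLERS[node["type"]](node, inputs)

# Switch style: uglier to read, but every case is visible to the
# compiler at the call site, which is what made the GPU version faster.
def eval_with_switch(node, inputs):
    t = node["type"]
    if t is NodeType.CONSTANT:
        return node["value"]
    elif t is NodeType.MULTIPLY:
        return inputs[0] * inputs[1]
    elif t is NodeType.ADD:
        return inputs[0] + inputs[1]
    raise ValueError(f"unknown node type {t}")
```

Both dispatchers compute the same result; the talk's point is that the second form, despite its bulk, was the faster one on the GPU.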
Another criticism we've had of Redshift is that with many AOVs in flight, rendering the beauty pass could actually be quite slow. We use the GPU to do all the heavy lifting (shading, ray tracing, and texturing), but we use the CPU to post-process AOVs. We try to do this smartly, using all the multi-threading, multi-core capabilities of CPUs, but it was not as efficient as we would like, and we found some pretty serious bottlenecks where the GPU and CPU were waiting on each other. So we dug deep and eliminated those bottlenecks so that the AOV processing runs more in parallel with the GPU rendering, and that gave us a two times speed-up.

We also got complaints from some of our large customers with very large proxy files that these were very slow, especially when transferred over a network. So we improved our file access in general with a caching system, which gave a good performance benefit, and with smarter compression. This means you can compress these giant proxy files and transfer them over the network, which is way faster, and overall we got a huge win: about ten to a hundred times faster large-proxy-file processing.

Very recently we also found some bugs in Redshift, particularly when shadow rays were tracing transparent objects: the texture caching system was not being used very efficiently there. We managed to fix that, which got back quite a bit of performance. And we're doing some research right now to improve importance sampling overall, especially for blended materials. This is still R&D, but we think we can get blended materials running about three to four times faster in the future, so watch this space.
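The proxy-file approach described above (compress for transfer, cache what has already been decompressed) can be sketched as follows. The class and method names are invented for illustration and are not Redshift's API.

```python
import zlib

class ProxyCache:
    """Toy sketch: compress proxy payloads for network transfer, and
    cache the decompressed result so repeated access skips both the
    transfer and the decompression."""

    def __init__(self):
        self._cache = {}
        self.decompressions = 0  # instrumentation for the sketch

    @staticmethod
    def pack(payload: bytes) -> bytes:
        # A smaller payload means a faster trip over the network.
        return zlib.compress(payload, 6)

    def load(self, name: str, packed: bytes) -> bytes:
        # Only decompress on a cache miss; later loads are free.
        if name not in self._cache:
            self._cache[name] = zlib.decompress(packed)
            self.decompressions += 1
        return self._cache[name]
```

Mesh-heavy proxy data tends to compress well, which is where the "compress, then transfer" half of the win comes from; the caching half removes the repeated-access cost entirely.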
Recently we brought out Redshift 2.5. This basically supersedes 2.0, and it brings new features and general improvements. The first improvement is in noise detection. People have complained in the past that our unified sampling does not clean the frame very well when you have both very bright and very dark areas: the bright areas clean up really nicely, very quickly, but the dark areas stay very noisy. The way artists would fix this was to either reduce the variance threshold or increase the number of samples, and on really heavy scenes sometimes both. The dark noise would clean up, but the frame would render significantly slower, and that's something we wanted to fix.

With Redshift 2.5 we've introduced an improved variance metric. What this means is that you get a more balanced noise reduction across the entire frame, for both bright and dark pixels, and we obviously wanted it to be faster as well. To give an example, here's a very simple dome-lit scene. You can see that the bright area cleans up very nicely but the dark areas are still very noisy; this is 2.0 with the default variance threshold of 0.01, rendering in about 19 seconds. To try to clean up that noise we reduced the threshold to 0.003, and as you can see it got nice and clean, but unfortunately it rendered twice as long, which of course is unacceptable. With Redshift 2.5 and the new variance metric, back at the default threshold of 0.01, you can see that the noise is now generally even across the entire frame, the dark areas are nice and clean too, and, more importantly, this renders in about 20 seconds.
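Redshift doesn't publish its exact variance metric, but the principle behind the 2.5 change can be sketched like this: an absolute noise threshold lets dark pixels stop sampling early even when they still look noisy, while a brightness-relative metric holds dark and bright regions to a comparable perceptual standard. Both functions below are illustrative, not Redshift's actual formulas.

```python
def needs_more_samples_abs(mean, variance, threshold=0.01):
    # Absolute metric: a dark pixel's variance is small in absolute
    # terms, so sampling stops early even when the pixel looks noisy.
    return variance > threshold

def needs_more_samples_rel(mean, variance, threshold=0.01, eps=1e-3):
    # Relative metric (illustrative): scale variance by pixel
    # brightness so dark and bright regions are judged comparably.
    return variance / (mean + eps) > threshold

# A dark pixel with small absolute but large relative noise:
dark_mean, dark_var = 0.02, 0.005
assert not needs_more_samples_abs(dark_mean, dark_var)  # looks "done"
assert needs_more_samples_rel(dark_mean, dark_var)      # flagged noisy
```

Under the relative metric the sampler keeps refining the dark regions at the same threshold value, which matches the talk's observation that 2.5 evens out the noise at the default 0.01 without a big time penalty.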
Fireflies: everybody hates them, all ray tracers suffer from them, and Redshift is no exception. You get fireflies when, for example, a very rough surface reflects something very glossy that can see a very bright light or a very bright incandescent object; due to the low sampling probability, you get these hot, nasty fireflies. As you can see here, there's a nice flash... well, it's not nice, it's horrible programmer art (it's mine), but even more horrible are those nasty white dots: those are fireflies.

So how do we clean them up? In Redshift 2.0 and earlier, our only solution was a very aggressive secondary-ray intensity clamp, which means anything super bright gets brought down to a lower intensity. As you can see, that cleaned up the fireflies quite nicely, but you lose a lot of the detail; the vibrancy is gone. That's something we wanted to fix. So in 2.5 we've added a roughness clamp on the glossiness, on top of the intensity clamp, and what this gives us is that we get back the reflections in the scene, and some of that vibrancy, while still cleaning up the noise. This rendered with the same number of samples as before, and you get a much better result. To get that kind of noise cleanup with 2.0 you'd have to use about 60,000 samples, which of course is ridiculous. So: less energy lost and fewer samples required means faster rendering and better-looking results.

One of the new feature areas for 2.5 is material flexibility. As I mentioned earlier, we integrated with the Golaem plugin, and one of the things this plugin required was per-object user data. This is where you can store any kind of attribute on an object, like on a mesh: scalars, colors, vectors, integers, that kind of thing. You can then use them in shaders to get material variations without actually having to have different materials.
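The two firefly clamps described earlier can be sketched roughly as below. The exact behavior of Redshift's clamps isn't public; this takes one plausible reading, where the intensity clamp caps bright secondary-ray contributions and the roughness clamp floors the glossiness seen by secondary rays so energy spreads over more directions instead of piling into rare, extremely bright paths.

```python
def clamp_intensity(color, max_value):
    # 2.0-era fix: aggressively cap bright secondary-ray contributions.
    # Kills fireflies, but the lost energy drains the image's vibrancy.
    return tuple(min(c, max_value) for c in color)

def clamp_roughness(roughness, min_roughness):
    # 2.5-era addition (one plausible reading): floor the roughness used
    # for secondary bounces, spreading glossy energy over more sample
    # directions so the harsh intensity clamp can be relaxed.
    return max(roughness, min_roughness)
```

The trade-off the talk describes falls out of this: clamping intensity alone loses energy everywhere, while adding the roughness clamp reduces variance at the source, so less clamping (and less energy loss) is needed for the same sample count.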
Setting up different materials for every variation can be very cumbersome, so to facilitate this we've added some new shader nodes: shaders to read that object data, and also a shader switch node. What the shader switch node gives you is the ability to have different shader graphs for different switch values, which you can read from the objects. That means you can use the same material with, say, different textures for different objects; some objects might get a smiley face and others a sad face, for example. The switch node does this really efficiently, it's easy to use, and it has about ten inputs; if you want to go crazy with a more complex shader graph, you can chain them for more flexibility. One thing that was very important to us when adding this was efficiency: we didn't want to burden regular shading with this new technology, and thankfully we managed that.

OK, another feature: particles. We've always supported mesh-instance particles, and these are very flexible and look really good, but they come with the overhead of using more memory than we would like. And, as I mentioned before with user data, we also have per-point particle data, which means you can vary things like the color of particles; this data could not go out-of-core, and that was a problem. So we've added simple particle primitives that are built into the ray tracer itself. Initially these will be sphere primitives, which are perfect for very fine-grained effects like sand, dust, or spray. They render incredibly fast and are very lightweight, so you can have hundreds of millions of these particle primitives without using a lot of memory, and best of all, they can go out-of-core, which mesh instances couldn't do.
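The shader switch idea from the material-flexibility section above can be sketched like this. The node and attribute names ("variant", the texture labels) are hypothetical; the real nodes run inside Redshift's shading system on the GPU.

```python
def make_switch_node(graphs, default):
    """Shader-switch sketch: pick a shader graph by an integer read
    from per-object user data (as set, e.g., by a crowd tool)."""
    def evaluate(obj_user_data):
        index = obj_user_data.get("variant")
        graph = graphs.get(index, default)
        return graph(obj_user_data)
    return evaluate

# Two trivial "shader graphs": a texture choice tinted by per-object
# color data, standing in for real shading networks.
happy = lambda d: ("smiley_tex", d.get("color", (1, 1, 1)))
sad   = lambda d: ("sad_tex",    d.get("color", (1, 1, 1)))

switch = make_switch_node({0: happy, 1: sad}, default=happy)
```

One material, many looks: every object shares the same switch, and the per-object data decides which branch each object takes; chaining switches (feeding one switch's output into another's input) extends this past the roughly ten inputs the talk mentions.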
All right, automatic memory management is finally here; I'm not lying! We actually announced this last year at SIGGRAPH, and it was going to come out with 2.0, but we decided to put the feature on hold so we could concentrate on improving 2.0 in general and adding the new 2.5 features. What this feature gives you is the ability for Redshift to tune the memory budget between textures, rays, and geometry to get the best possible performance. A very good use case is where you've got one box with two different GPUs in it: you might have a Titan with 12 GB and a Quadro with 24 GB, and manually deciding how to use the memory on both of those cards is not easy, so Redshift will do it for you with automatic memory management.

Also, very recently, about two weeks ago, we added trace sets. These are for power users: they allow you to choose what a surface can see through reflections or refractions. This technology sits deep inside the ray tracer, so it was very important that it was fast; we didn't want the ray tracer to get slower by adding this feature, because sometimes more flexibility means worse performance, but we found a way to have no performance impact, and we're very pleased with that. So: trace sets.

Redshift has been criticized in the past as being very, very fast at final-frame rendering but kind of sluggish when it comes to interactivity, like IPR mode or scrubbing shader balls, that kind of thing. This was very important for us to fix in 2.5, so we worked very hard on it; we wanted a real-time feel when playing with Redshift and looking at things in IPR. We're ex-video-game programmers at Redshift, so we don't use the term "real-time" lightly.
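Trace sets, as described above, can be sketched as a visibility filter applied when a reflection or refraction ray tests an object. The names and exact semantics here are illustrative, not Redshift's actual API; the point is that the check is a cheap set test, which is consistent with the talk's claim of no performance impact.

```python
def visible_to_ray(obj_trace_sets, ray_include_set):
    """Trace-set sketch (illustrative): a reflection or refraction ray
    only 'sees' objects tagged with its trace set. An empty include
    set means the ray is unrestricted and sees everything."""
    if not ray_include_set:
        return True
    return ray_include_set in obj_trace_sets

# A mirror whose reflection rays are restricted to the "hero" set:
hero_car = {"hero"}
background_crowd = {"crowd"}
```

With this, an artist can make a mirror reflect only the hero object while the noisy background crowd simply never appears in the reflection, without touching the rest of the scene.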
To us it means 30 to 60 frames a second, which of course is silly for final-frame rendering, but IPR now feels like real-time rendering. With this we've brought out the Redshift viewport. Currently it's available in 3ds Max and Cinema 4D, and it's going to be in Maya and Houdini very soon as well. What this gives you, first and foremost, is getting the data very quickly from Redshift to the viewport so you can see it, which of course is crucial for interactive rendering. It also adds features like a render snapshot history, so you can compare frames, and AOVs in the frame buffer, which some people have asked for because they want to see AOVs without having to look at them in Nuke. We're also considering adding effects like depth of field, so you could take your mouse, pick a point on the screen, and Redshift would automatically adjust the camera to get that depth-of-field effect, and maybe later we'll add effects like bloom and things like that too, if enough people ask for them.

We've also just added the Redshift benchmark tool. This means you can take a proxy scene file, render it, and get some nice, easy-to-read stats back, so you can compare the relative merits of different graphics cards. Initially this is command-line only, with no GUI, but if enough people ask for it we'll add a user interface too.

OK, coming up: custom AOVs. These are very powerful things, and they're just around the corner, probably about a month away. What they give you is the ability to store any data, from any point within a shader graph, to an AOV file. With that we're going to be adding two new shader nodes to Redshift: a node to take the data and store it, and we're also going to be adding a node to
let data pass through without affecting the beauty, so you can store it later; we'll talk a little about the benefits of that in a second. One powerful thing about custom AOVs is that you can do true multi-layer material compositing in post. This was something you could not do with 2.0, and now you can with custom AOVs. We're also going to be able to do very efficient AOV pass rendering. As an example, an artist might come to us and say, "Hey, I want to generate an AO AOV, a full-screen AO; how do I do that?" Unfortunately there was no easy way: we'd have to say, go and make a new pass and replace all the materials with an AO material, and away you go. That's a very cumbersome thing to do, and it's also very slow because you have to render the scene twice. With the custom AOV tech we're adding, this is not going to be a problem anymore: you'll be able to define any kind of shading override for an AOV, like AO for example, and it will be generated concurrently with the beauty, so it's easier to use and faster as well.

Just after that, we're going to be adding per-light AOVs; this is something power users have been asking for, and we're very excited about adding it. There are also some puzzle-matte optimizations: right now puzzle mattes can be very slow because they render in multiple passes, so we're reworking this to render more things concurrently, and you'll get your puzzle mattes out much quicker.

Alembic procedural support is coming very soon too. This means you can take raw Alembic data without having to unpack it in the DCC and transfer it around, which can be quite a long process; you can hand Redshift these lightweight files and let it pull the data itself when it needs it, which of course means scene extraction is faster and the scene is more lightweight.
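The store/pass-through node pair described above can be sketched as follows. The node behavior is paraphrased from the talk, and the function names are invented: the store node writes a value into a named AOV while handing its input through unchanged, so the beauty computation is unaffected and the extra AOV comes out of the same render.

```python
def store_aov(aov_buffers, name, value, passthrough):
    """Sketch of a 'store' node (hypothetical name): write a value into
    a named AOV buffer while returning the pass-through input unchanged,
    so the beauty graph behaves as if the node weren't there."""
    aov_buffers.setdefault(name, []).append(value)
    return passthrough

# Evaluate a tiny "graph" for two samples: the beauty result passes
# through while an AO-like value is captured concurrently into its AOV.
aovs = {}
beauty = []
for ao, shade in [(0.9, (0.5, 0.4, 0.3)), (0.2, (0.1, 0.1, 0.1))]:
    beauty.append(store_aov(aovs, "AO", ao, shade))
```

This is the "concurrently with the beauty" property from the talk: one traversal of the graph fills both outputs, instead of rendering the scene twice with swapped materials.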
OK, coming in the future: the much-promised shader SDK. Some of our larger customers have been asking for this for some time, and a lot of people from Blizzard have been asking for it. We hope to get something out to you, probably right at the end of the year, to test in alpha. There's a lot of thinking that has to go behind this, but we want you to be able to write your own procedural shaders instead of asking me to write them for you, so this is a priority for us.

Toon shading is just around the corner as well. We've got some very good ideas about how to do it; we want to give you all the kinds of line features you'd want from toon shading, with very good performance too.

Ray-traced subsurface scattering is another very important thing, for two reasons. Firstly, with point-based subsurface scattering you can't get that fine detail around noses and such, but you can with ray-traced. Secondly, our progressive rendering does not support subsurface scattering, but with a ray-traced solution it will. We're working on this right now: technology that can take any kind of subsurface scattering profile, including the old ones and new ones we've not thought about yet, and render it very fast in the ray tracer. The early numbers are very encouraging.

Distributed remote rendering is coming soon as well, and we're also going to be adding support for barn doors on area lights, so you can focus an area light's direction, like directional lighting. XGen instances are just around the corner too, as is Maya curve support. Oh, and free educational licenses are coming this September, so you students out there, just get in touch and we'll give you some free
licenses and you can play with Redshift. OK, that's it from me. Please contact us for more information, or come and visit us at our booth, 231, if you have any questions.
Info
Channel: Redshift3D
Views: 64,762
Rating: 4.9161906 out of 5
Keywords: redshift, gpu, new, feature, nvidia, cuda, 3d, improve, improvement, photoreal, fast, quick, render, renderer, rendering, graphics, red, shift, redshift3d, update, presentation, siggraph, 2017
Id: GgIyeP28FIg
Length: 21min 40sec (1300 seconds)
Published: Mon Oct 23 2017