How Does the Hedgehog Engine Work?

Captions
[Music] Do you know what the Hedgehog Engine is? I'm sure you've at least heard of it, and I'm also sure you're all familiar with Sonic Unleashed, Generations, Lost World, etc. But how many of you understand how they work? How many of you have ever had the curiosity to learn how the engine was developed and how it produces the graphics it does? Well, there is a one-hour-long presentation from 2009 explaining exactly that, yet most people only seem to watch one minute of it because of a supposed PC version of Sonic Unleashed. Well, if you do have the curiosity to learn about it, don't fret, because today I'm going to explain to you how the Hedgehog Engine works, both 1 and 2, the latter being used for Forces and Frontiers, diving into its backstory, its development, and the specific workings that compose it. And since there aren't any videos explaining how it works in a simpler, easier-to-understand way, I went through the effort of learning it to make this video. I want to stress that I'm not very knowledgeable in the area at all, so I tried my best to assimilate the concepts and the specific nomenclature, and I'll also try my best to explain everything correctly and in a clearer way than what's available online. If you notice any error in my explanation or anything wrong, please feel free to correct me in the comments. So let's get into it.

The year is 2005. Yoshihisa Hashimoto, who had originally been hired as an enemy programmer for Sonic Adventure, was assigned as the director for the next big Sonic game, Sonic Next-Gen, or Sonic 06 as it would become known. Since he had worked on the past several Sonic games, he analyzed past mistakes and decided what needed to be done to move the series forward: a new, groundbreaking graphics engine. So his team instead started working on the Hedgehog Engine, with Shun Nakamura then being assigned to the actual new game. It's believed that the engine was originally meant to be used by Sonic 06, as it was intended for the next big Sonic game and for the new seventh generation of consoles, which supported more and better graphics. But as we all know, the development of 06 was really short and thus rushed, so the engine wasn't finished when the game's development began. Hashimoto is also credited in 06 under special thanks, without apparently being involved in any way or form.

Anyway, the main motive behind the engine, according to Hashimoto, was to achieve in-game graphical fidelity on par with pre-rendered CGI, particularly the works of Pixar. As you may guess, for a video game this wasn't an easy task, especially with the technology available at the time. So how exactly do you go about this? At first they thought they couldn't do it, and the artists behind the project were in absolute disbelief at the prospect of even trying, as it sounded impossible for the specs of the consoles; they just weren't powerful enough. So they had to get smart. At the time, an influx of games with better, more realistic graphics was coming out, and they made use of the quote-unquote tricks every game did, and still does, to achieve it: normal maps, which are used to add detail without adding more polygons; shadow maps; HDR, which is used to simulate how our eyes react to light and thus more faithfully represent colors and exposure; shaders; and so on. They looked good on the surface, but if you went and closely inspected them, they looked flat. They lacked 3D; they were just PS2 graphics with normal maps. It looked richer than what was the norm before, but it's not good enough, especially in areas where objects are not hit directly by any light source: shadows.
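To see why that style of rendering looks flat, here's a minimal sketch, not taken from the engine, of the classic "diffuse texture times direct light" shading those games relied on: anything the light doesn't reach directly gets only a constant ambient value, so shadowed and concave areas collapse into a single flat tone. The function and numbers are just an illustration.

    import math

    def shade_direct_only(albedo, normal, light_dir, light_color, ambient=0.05):
        # Classic Lambertian direct lighting: albedo * lightColor * max(N.L, 0),
        # plus a small constant ambient term. Anything facing away from the light
        # gets only the flat ambient value, which is why scenes lit this way look
        # flat and disconnected in areas the light doesn't reach.
        def norm(v):
            length = math.sqrt(sum(c * c for c in v))
            return [c / length for c in v]
        n_dot_l = max(sum(a * b for a, b in zip(norm(normal), norm(light_dir))), 0.0)
        return [a * (c * n_dot_l + ambient) for a, c in zip(albedo, light_color)]

    # A surface facing the sun versus one tucked into an unlit, concave corner:
    print(shade_direct_only([0.8, 0.6, 0.4], [0, 1, 0], [0, 1, 0], [1, 1, 1]))   # fully lit
    print(shade_direct_only([0.8, 0.6, 0.4], [0, 1, 0], [0, -1, 0], [1, 1, 1]))  # flat ambient only

The whole point of what comes next is to replace that flat ambient term with something that actually depends on the surrounding geometry and colors.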
Nowadays, shadows are one of the most important elements in graphics for making your game look realistic, and at the time they didn't look realistic. So the developers asked themselves: do they look flat because the only colors you can see in them are those from the diffuse texture? The diffuse texture is the one that contains the color information, so you could call it the main one. The answer they got from this was: we need to make everything look 3D, like pre-rendered CG. Then what are the differences between pre-rendered CG and real-time CG, that is, between a rendered scene and normal game graphics? They looked at allegedly thousands of CG images and came to some conclusions. The main difference was that pre-rendered footage looked more 3D in all areas, with noticeable depth, because concave areas, for example, are depicted as darker, as they realistically are; that's why an archway or a pillar looks darker as the light travels through it.

An easy-to-understand example is Sonic Heroes, which, even though it's a really beautiful game for its time, makes it pretty noticeable that its illumination algorithm is based purely on casting simple shadow maps onto other surfaces, making everything look kind of disconnected, like it doesn't belong in the same place. There are only two intensities of color: full brightness, as in the texture's plain appearance, or shadow, which makes no sense. And why doesn't it make sense? Because light doesn't behave like that in real life. When light hits a surface, the color of that surface will be cast onto another surface, which is known as a light bounce. If light hits a blue wall, its blue color will be cast onto anything close enough to it; if you put yellow curtains over a window and they cover it entirely, the whole room will have a yellow tint. It's really simple, but it wasn't easy to implement on the consoles of the time, much less in real-time computer graphics, and that's also the most noticeable difference between pre-rendered and real-time CG: bounce lighting.

Well, what does this mean, then? That the key to high-quality visuals is reflections. Solutions that were commonly used to make things more realistic were: ambient occlusion, which really only darkens concave areas with no consideration for light reflections, so it's still not realistic at all, but it may be useful depending on the art direction; hemispheric lighting, which is basically ambient light, a constant flat light source, but with consideration for normal maps, so it takes away the flatness at first, but it's not really meant for terrain rendering, which was the main focus; and vertex colors, which are literally colors embedded into the models themselves with no need for textures, which can be seen on some objects in Sonic Heroes, but that lacks precision and any sort of expression, and it wouldn't produce any shadows either.

So the method they ended up going with was global illumination. GI is a group of algorithms meant to add more realistic lighting to scenes, also known as indirect lighting, so it's used for light being reflected off surfaces and for colored light transferring from one surface to another. Basically, it's everything else that goes on with the lighting besides what the normal light source already does in a game like Sonic Heroes, which is just to illuminate and create shadows, with some pretty basic effects on top of that. It would be necessary to render the result of the light reflections at the highest possible resolution, except that's inviable because it would be painfully slow. So the next solution is to pre-calculate these reflections and bake them into the textures themselves.
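To make the light-bounce idea concrete, here's a tiny, engine-agnostic sketch of a single bounce: the receiving surface picks up the light a nearby wall reflects, tinted by the wall's own color and weighted by how the two surfaces face each other and how far apart they are. The function and the simple falloff are illustrative assumptions, not the Hedgehog Engine's actual math.

    def one_bounce(wall_albedo, wall_direct_light, receiver_albedo,
                   cos_at_wall, cos_at_receiver, distance):
        # The wall reflects the direct light it receives, tinted by its own color;
        # the receiver then picks that up, scaled by a toy geometry term (how the
        # two surfaces face each other, falling off with distance).
        wall_outgoing = [a * l for a, l in zip(wall_albedo, wall_direct_light)]
        geometry = max(cos_at_wall, 0.0) * max(cos_at_receiver, 0.0) / (1.0 + distance * distance)
        return [r * w * geometry for r, w in zip(receiver_albedo, wall_outgoing)]

    # A white floor half a meter from a brightly lit blue wall picks up a blue tint:
    print(one_bounce([0.1, 0.2, 0.9], [1.0, 1.0, 1.0], [0.9, 0.9, 0.9], 0.8, 0.8, 0.5))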
This can be seen in this scene in Unleashed, or in this one in Generations: you can see that even before these leaves load, the wall already has a green shadow on top of it, which goes to show that everything you see in the stage is not real-time lighting; it's just really clever texture work. Okay, I've shown you the result, but how do you even do this? The team spent an hour talking about what light even is and how it works. They talked it through, started working on an algorithm, and after two months of trial and error they ended up with this: a real-time scene of what would become the Shamar hub world, fully working on the PS3, except really, really slowly, of course. There is a lot of stuff that is very different compared to how the engine works in the final games, such as the sun being really bright, but compared to what had been seen in any Sonic game thus far, and perhaps in all games, it was groundbreaking. You can see that the concave areas, the areas with more depth and the curved recesses, are properly darkened, and even the light that passes through the leaves becomes green, with decent quality shadows. The developers were excited with what they had done and expectations were high, except this would be just the start of a very long and painful journey.

Okay, but how does this really work? Because you need some way to calculate the global illumination. Would you use ray tracing? Not in real time, of course, I mean pre-rendered, like a lot of games already did. Radiosity? Photon mapping? No, they didn't understand any of those; to them they were just buzzwords. So they created their own unique method, but first, a proof of concept: a light model. Think of light as particles with energy that travel in every direction, each particle traveling in a straight line after being emitted from a light source, so basically a ray. Once a particle hits an object, some of the light's energy is temporarily absorbed by the object depending on its material; what's not absorbed is then split up and re-emitted, so one particle becomes ten, for example. The problem with this is the following: assume that the light source emits multiple particles. One of them hits something and spawns ten others from the collision point, which then hit other objects, spawning more particles, and more, and more. One ray becomes a hundred, then ten thousand, then a million, then a hundred million. How do you even render all of this? It's simple, of course: you don't. It would take a quote-unquote supercomputer more than a hundred years to render something like that. You need to limit the number of light bounces.

So, back to the drawing board, they came up with another method: a light network. Every surface of a model is subdivided into micro facets, each one the equivalent of a texel (which is, for a texture, what a pixel is for a normal image), and each belongs to a texture containing global illumination information, so how that point interacts with light and how light will leave it: a GI map. From each micro facet you shoot a number of rays, defined by a tool, in 360 degrees, then store the information for each micro facet that was hit by a ray. Let's say 100 rays are shot out of a micro facet; those 100 rays connect it to 100 other micro facets, and this continues until every single one has been hit, creating a network. Limiting the number of light bounces limits the number of possibilities, making it possible to get something renderable in a reasonable amount of time. What this basically does is directly transmit the color of an object onto another surface, depending on which micro facets the rays leaving it land on: bounce lighting.
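Here's a very simplified sketch of that light-network idea: every texel records which other texels its rays hit, and then color is pushed along those links for a small, fixed number of bounces. The structure and names are purely illustrative (the hits are even chosen at random so the snippet stays self-contained); the real bake traces rays against the actual level geometry.

    import random

    def build_light_network(num_texels, rays_per_texel, seed=0):
        # For each micro facet (texel), record which other texels its rays hit.
        # A real bake would trace these rays against the level geometry; here the
        # hits are picked at random just to keep the sketch self-contained.
        rng = random.Random(seed)
        return [[rng.randrange(num_texels) for _ in range(rays_per_texel)]
                for _ in range(num_texels)]

    def propagate(network, direct_light, albedo, bounces=2):
        # Push color through the network a fixed number of times. Each texel's
        # indirect light is the average of what its linked texels currently emit,
        # tinted by its own albedo. Capping 'bounces' is what keeps this tractable.
        radiance = [list(c) for c in direct_light]
        for _ in range(bounces):
            updated = []
            for texel, links in enumerate(network):
                gathered = [sum(radiance[other][i] for other in links) / len(links)
                            for i in range(3)]
                updated.append([direct_light[texel][i] + albedo[texel][i] * gathered[i]
                                for i in range(3)])
            radiance = updated
        return radiance

    # Three texels: a brightly lit red wall plus two unlit receivers that pick up its color.
    network = build_light_network(num_texels=3, rays_per_texel=8)
    direct = [(1.0, 0.1, 0.1), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
    albedo = [(0.9, 0.9, 0.9)] * 3
    print(propagate(network, direct, albedo))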
And this works for all types of light sources used by the engine. It may sound really confusing, and I don't really understand enough of the technicalities to give a more thorough and comprehensive explanation, but the results are obvious: the light leaves the sun, hits an object, and that object reflects the light onto another object that isn't directly hit by the light source. The source will hit every object it can, and everything that can't be seen by it falls into shadow, which is then lit more realistically by the light reflected off everything around it.

So here are the results of this method. As you can see, the red railing reflects its light onto the white wall, and Sonic is noticeably tinted green in Savannah Citadel because of the leaves. The effect is also smart in the sense that it respects materials and their properties, as I've mentioned before, such as in the scene where the light goes through the yellow cloth and is reflected onto the ground. In a scene with only GI textures, you can see how the light is meant to travel around the models and how it reflects onto other surfaces, having noticeable depth and pop compared to older games even without any textures. As is obvious, you can't say the same for a scene with no global illumination, even though it's using diffuse and normal maps and basic shaders. Of course, they complement each other, so if you combine both of them, you get what we're looking for. There is a lot of other stuff happening behind the scenes, especially for the GI textures, which at this point weren't 100% accurate when generated automatically, so the artists had to manually review them and paint high quality details into some of them, but this already greatly reduces the manual labor and the production time.

All of this was just for the terrain, though, because it's still necessary to make the characters look like they belong to the environment. So they created light fields. The stage is first automatically subdivided into big and small box-shaped cells according to the terrain's shape, and each of the cells' vertices holds color data that's automatically assigned based on the terrain's global illumination data, so it basically stores information about the general color around it. The effect this has is, for example, that if Sonic is close to a yellow wall with pink flowers and green leaves on top, a mix of those colors will be smoothly reflected onto him, as if light were bouncing off of them onto him. Without the light fields he'd look like he does in SA1, SA2, or Heroes, where sometimes it looks like he isn't really part of the stage, because there is no algorithm adapting him to wherever he's standing.

Now, this still looks like a lot to render, and it was. The terrain they tested with was 500 by 500 meters in area, with a GI calculation time of two entire days and GI texture data that amounted to 100 megabytes. That might not seem like a lot, but you have to consider that an entire stage ranges from 5 to 20 kilometers, which would make the GI textures fluctuate from several hundred megabytes to at least a gigabyte, with the computation time amounting to several months. They knew this could be a problem before they even started tackling global illumination, though they didn't really know how far they were going to push it. They pushed it a lot. So here's what they did: since it was going to take lots of computation time, they needed a distributed computing setup with multiple machines. That way the workload could be spread across 100 PCs, majorly reducing the time needed to create the stages and improving the speed by anywhere from 10 to 100 times.
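Going back to the light fields for a second: here's a minimal sketch of how that lookup could work, assuming one axis-aligned cell with color data stored at its eight corners and the character's ambient tint obtained by trilinearly blending those corners at his position. The cell layout, names, and blending scheme are assumptions for illustration, not the engine's actual light-field format.

    def sample_light_field_cell(corner_colors, cell_min, cell_max, position):
        # Trilinearly blend the colors stored at a box-shaped cell's eight corners
        # to get an ambient tint at a point inside the cell. corner_colors is
        # indexed [x][y][z] with x, y, z in {0, 1}; this layout is an assumption
        # for the sketch, not the real data format.
        t = [(position[i] - cell_min[i]) / (cell_max[i] - cell_min[i]) for i in range(3)]
        t = [min(max(v, 0.0), 1.0) for v in t]  # clamp to the cell
        result = [0.0, 0.0, 0.0]
        for x in (0, 1):
            for y in (0, 1):
                for z in (0, 1):
                    w = ((t[0] if x else 1 - t[0]) *
                         (t[1] if y else 1 - t[1]) *
                         (t[2] if z else 1 - t[2]))
                    for i in range(3):
                        result[i] += w * corner_colors[x][y][z][i]
        return result

    # A cell whose +X corners sit next to a yellow wall: standing near that wall
    # tints the character yellow, standing at the far side leaves him neutral.
    grey, yellow = (0.5, 0.5, 0.5), (0.9, 0.8, 0.2)
    corners = [[[grey, grey], [grey, grey]], [[yellow, yellow], [yellow, yellow]]]
    print(sample_light_field_cell(corners, (0, 0, 0), (4, 4, 4), (3.5, 1.0, 2.0)))
    print(sample_light_field_cell(corners, (0, 0, 0), (4, 4, 4), (0.5, 1.0, 2.0)))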
This way, they could tackle GI calculations for big levels in one or two days, which is still kind of nuts to think about nowadays. Or so they thought. To render the levels, they needed several gigabytes of diffuse maps to create the GI maps, as well as about 100 GB of data for the light networks before everything was done, which in around 2006, on 32-bit PCs using slow mechanical hard drives, was a pain. They solved it by acquiring several 64-bit PCs, bumping the RAM up to 16 GB in all of them, which at the time was relatively overkill, buying better hard drives, and compressing the light network data with a compression format they developed, managing to reduce it from 100 to 70 GB on average, which took the second phase of processing from days to mere hours. The process was so taxing on the entire system they had built that they blew the circuit breaker in their office, so they literally couldn't use any other appliances, otherwise the breaker would pop and all progress would be lost. That didn't solve the problem anyway, so they had to upgrade the entire power system in their office.

You might be wondering: well, if the data was so large, how did it fit into the game? If the data were too large, the data transfer speed would be really slow and the game would easily break. The first thing they tried was to split the GI maps into mipmaps, which are basically several versions of a texture, one small, one medium, and one normal, with the game rendering a specific version depending on how far away you are from it. But as you may have guessed, doing that for every single texture creates a lot of clutter internally and wastes space unnecessarily, so they created a tool called GI Atlas, which gathers up neighboring small textures to create a single 512x512 version, eliminating wasted memory and speeding up transmission. They also packed every level into a single file to reduce the number of files and optimize streaming. Either way, the game doesn't ship with the highest quality GI maps, because they wouldn't fit on one disc and the team wanted the game to be easily accessible; that's why the shadows and lighting effects can look noticeably low quality if you get too close. A remnant of this can be seen in the preview build of Sonic Unleashed, where the game uses the high-quality GI maps and is apparently missing its second disc, so a lot of stuff appears to not be working anymore. A fun but weird fact regarding this is that the high-quality GI maps are present in the DLC stages; that's why they look noticeably more detailed in comparison to the normal ones. By the way, the presentation they gave on this also shows a proof of concept of Camelot Castle using the engine, meant to show how much of a difference good lighting makes for low-poly environments.

The game also makes use of backface culling, which consists of not rendering the faces of a model that the camera can't see, as well as manual triggering of the rendering of certain areas of the level to reduce lag, which gets called by passing through specific collision objects in the stages. That's why, if you use a glitch or a mod to skip huge portions of the level, the level geometry won't render properly: the game never called the function to render it.

This is all fun and games, but how does this actually render? Is there a way we can visualize it? Yes, yes there is. There is a very useful tool called NVIDIA Nsight that lets us look at exactly how the GPU goes about a rendering process. The first thing the game does is render the shadow maps from the perspective of the light source.
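Since the GI Atlas tool itself isn't public, here's a generic sketch of the underlying idea: greedily shelf-pack many small GI textures into as few 512x512 atlas pages as possible. The algorithm and names are my own illustration, not SEGA's tool or its file format.

    ATLAS_SIZE = 512

    def try_place(page, w, h):
        # Shelf packing on one page: fill the current row left to right, and when
        # it runs out of width, start a new row below it. Mutates only on success.
        x, y, shelf = page["x"], page["y"], page["shelf"]
        if x + w > ATLAS_SIZE:                 # current shelf is full, open a new one
            x, y, shelf = 0, y + shelf, 0
        if x + w > ATLAS_SIZE or y + h > ATLAS_SIZE:
            return None
        page["x"], page["y"], page["shelf"] = x + w, y, max(shelf, h)
        return (x, y)

    def pack_gi_textures(sizes):
        # Gather many small GI textures into as few 512x512 atlas pages as possible.
        # 'sizes' is a list of (width, height); returns ({index: (page, x, y)}, page count).
        # Assumes every texture fits on a page by itself.
        pages, placements = [], {}
        for idx in sorted(range(len(sizes)), key=lambda i: -sizes[i][1]):  # tallest first
            w, h = sizes[idx]
            for page_no, page in enumerate(pages):
                spot = try_place(page, w, h)
                if spot:
                    placements[idx] = (page_no, spot[0], spot[1])
                    break
            else:                              # no existing page had room, open a new one
                pages.append({"x": 0, "y": 0, "shelf": 0})
                spot = try_place(pages[-1], w, h)
                placements[idx] = (len(pages) - 1, spot[0], spot[1])
        return placements, len(pages)

    # Sixteen 128x128 lightmaps plus four 256x256 ones fit into two 512x512 pages.
    placements, page_count = pack_gi_textures([(128, 128)] * 16 + [(256, 256)] * 4)
    print(page_count)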
It renders the objects that are furthest away first and then the ones that are closer, so that every object can receive a cast shadow. Then the game builds the depth buffer, which is necessary to represent the depth of the objects in the scene. It would be intuitive to think that the objects closest to the camera would render first, to reduce the number of calculations needed and speed up rendering, except that's not what happens here: the game prefers to render the smallest parts of the models first, even if they'll end up behind a bigger model and invisible to the camera, which is unusual. I would think this doesn't affect performance much, since the game makes use of culling, as I've said, which keeps those parts from rendering when the camera can't see them, and also because the level is rendered in small portions, so I guess that's the smartest way they could have gone about it while keeping their artistic vision and that level of detail. Then some elements such as foliage, railings, grates, fences, and so on, basically the objects with less light absorption, get drawn with no global illumination, as does the character model. The GI then gets applied, and everything gets rendered in roughly the same order as the depth buffer, with the skybox being the last stage element to be added. The actual last thing to get rendered is the HUD, and after that's done we get a finished frame.

Well, NVIDIA Nsight lets us look at things like the depth buffer, which is nice, but it doesn't really let us see how the engine works step by step. To do that, we can instead use the free camera mod for Sonic Generations. First, the level geometry and objects are loaded using only their diffuse and normal maps, along with basic shaders and lighting. Then the GI texture gets applied, and the GI algorithm along with the light fields get loaded, making the green from the grass bounce onto the tree trunk here and giving the bushes noticeable shadows and depth. Afterwards, the basic shadows get updated to low-resolution versions of the real shadows, and then higher-resolution versions are subsequently applied, first for the indirect lighting, which can be seen on the trunk again, and then for the direct lighting, which can be seen on the floor and on the rock formations, giving the scene its final details and much-needed depth.

The Hedgehog Engine has also been updated in recent games to the Hedgehog Engine 2. The latter now supports real-time lighting, which the first didn't, hence why they had to render everything beforehand, and PBR, which makes everything look more realistic and actually supports detailed protrusions instead of needing to rely exclusively on normal maps for everything. It's basically Hedgehog Engine 1 but majorly updated, with more realistic and up-to-date principles, no longer having to rely so much on art direction to achieve the desired result, nor on the magic tricks the original developers had to come up with. It also now makes use of image-based lighting. This consists of taking information from cube maps, which are literally cube-shaped images of anything you want, used here for reflections, and projecting their colors onto everything around them. The light fields are still used for the same purpose, but IBL helps a ton with the general look of the environment.

For Sonic Frontiers, the first thing it does is, of course, render the shadow maps and the depth buffer for the entire island. Then it switches to the camera position, generates the depth buffer again, and begins rendering every object, starting with Sonic, with basic lighting. At the same time, the normal maps get drawn, which appear to be doing a lot of work for the rocks here.
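On the image-based lighting point: here's a small sketch of the standard cube-map lookup that technique is built on, picking the face a direction points at and reading a color from it. The face and UV conventions follow the usual graphics-API layout; the data and names are made up for the example, and this isn't Hedgehog Engine 2 code. In practice, diffuse ambient light samples along the surface normal (usually from a pre-blurred copy of the cube map) and reflections sample along the reflection vector.

    def sample_cube_map(cube_faces, direction):
        # Pick the cube face the direction points at, convert the direction to UV
        # coordinates on that face, and return the stored color. 'cube_faces' maps
        # '+x', '-x', '+y', '-y', '+z', '-z' to small 2D grids of RGB tuples.
        x, y, z = direction
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            face, major, u, v = ('+x' if x > 0 else '-x'), ax, (-z if x > 0 else z), -y
        elif ay >= az:
            face, major, u, v = ('+y' if y > 0 else '-y'), ay, x, (z if y > 0 else -z)
        else:
            face, major, u, v = ('+z' if z > 0 else '-z'), az, (x if z > 0 else -x), -y
        grid = cube_faces[face]
        size = len(grid)
        # Map u, v from [-major, major] to texel indices on the chosen face.
        col = min(int((u / major * 0.5 + 0.5) * size), size - 1)
        row = min(int((v / major * 0.5 + 0.5) * size), size - 1)
        return grid[row][col]

    # A 1x1 cube map with blue sky on five faces and brown ground below:
    sky, ground = (0.4, 0.6, 1.0), (0.3, 0.25, 0.2)
    faces = {f: [[sky]] for f in ('+x', '-x', '+y', '-y', '+z', '-z')}
    faces['-y'] = [[ground]]
    print(sample_cube_map(faces, (0.0, 1.0, 0.0)))   # looking up: sky blue
    print(sample_cube_map(faces, (0.2, -1.0, 0.1)))  # looking down: ground brown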
Flow maps give Sonic's fur, and other stuff such as the light from the platforms, a flowy look, and what appear to be roughness maps or other types of shaders, along with ambient occlusion for detail textures, alpha channels, and other things, are probably being loaded at the same time, but you can't really make it out without extensive analysis, nor am I really qualified to do so. Finally, post-processing effects get applied, like anti-aliasing and bloom, and then, bam, global illumination. The last thing to get rendered is again the HUD, which is business as usual. It probably looks a bit off in my footage, but that's some sort of conflict with Nsight for some reason. The biggest difference between the two engines is that the game initially gets rendered with no lighting algorithm or cast shadows at all, which showcases the new engine's real-time lighting, whereas in the first one, as I've shown before, the textures carried lighting information and already had shadows drawn onto them before objects were even loaded on top; that's why global illumination gets applied so late in the rendering process here. Of course, the way everything works is quite different in Frontiers compared to the other games, because the art direction is quite different, and I couldn't get this working in Forces, but just know that everything works essentially the same way, though there are probably different tricks at play between these titles.

Graphics are one of the most important things in every Sonic game, even if they're not the first thing that comes to mind when you think of a Sonic game. They are the main principle of the entire art direction; it's them that make the games we love look how they do, and it's them that make the stages charming, epic, memorable, and even nostalgic. It's pretty interesting to learn how they work and how they help create these atmospheres, so I hope I did a good enough job at it, and I hope that you learned something too. I also hope you enjoyed the video, and if you did, you can support me by subscribing and turning on notifications, as it really does help me out a ton, since I put a lot of effort into these videos. If you can spare any money, you can also support me for as low as 1.50 on Patreon, while getting early access to videos, your name at the end of the screen, and other perks that will soon be added there. I also have a Twitter if you want to follow me there, and check out my community tab once in a while, as I post quite frequently there. And that's about it, really. I'll hopefully see you soon. [Music]
Info
Channel: Cifesk
Views: 164,596
Keywords: cifesk, sonic the hedgehog, sonic lost media, sonic stages, sonic level design, level design, sonic retrospective, sonic frontiers ost, sonic unleashed ost, sonic game retrospective, sa1, sa2, colors, colours, frontiers, sonic 2, sonic cd, sonic 06, sonic heroes, sonic forces, sonic generations, shadow generations, sonic mania, sonic lost world, sonic frontiers ending, sonic rumble gameplay, sonic superstars, hedgehog engine, hedeghog, sonic graphics, sonic dream team, unleashed
Id: oEwbloJOStA
Length: 22min 6sec (1326 seconds)
Published: Sat Jun 01 2024