AI images to meshes / Stable diffusion & Blender Tutorial

Video Statistics and Information

Captions
Hello guys, a couple of days ago I uploaded this video with our weird-looking geometry and got a lot of requests and comments asking me to break down the technique of how I made these meshes using Stable Diffusion. Now I'm going to show you the whole process. Basically, it's a simple displacement of a plane by projected images with depth maps, and both the images and the depth maps we can get using AI. So without further ado, let's go.

At this point I will assume that you already know how to generate images using AI tools like Stable Diffusion, Midjourney, or whatever. Now we need to get depth maps for them. For this we can use the ControlNet extension in depth mode: we can take our image and generate without any problem, and the preprocessor will analyze our image to build a depth map. This is our depth map, and we can save it. If you don't have ControlNet, you can use the ZoeDepth model from Hugging Face; it works online. You basically drop your image here, click Submit, and after this you can save the resulting depth map.

In Blender we need to create a plane and subdivide it. This is pretty dense geometry, and it will be enough for our use case. Now we add a Displace modifier and select our depth map as the displacement texture. I think the ControlNet depth looks better. Then Shade Smooth. Now I can create a material and apply our 2D image as the base color, and you have your geometry. To get rid of the sharp edges we can apply a Subdivision Surface modifier; that works for me. Now we can mirror the object, maybe like so. We mirror it along the Z axis and bisect to get rid of the unwanted parts, then sculpt our mesh a bit. And this is pretty much it, this is our geometry. We can apply the Mirror modifier, and as you can see here, there are about 200,000
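The core displacement step can be sketched outside Blender too: each vertex of a regular grid gets pushed along Z by the depth value sampled at its position, which is what the Displace modifier does with the depth map as its texture. A minimal NumPy sketch (the tiny 3x3 depth array and the `strength` value are illustrative assumptions, not from the video):

```python
import numpy as np

def displace_grid(depth: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Build an (H*W, 3) vertex array for a unit plane displaced by a depth map.

    depth    -- 2D array of values in [0, 1] (e.g. a loaded grayscale depth image)
    strength -- how far a white pixel pushes its vertex along Z
    """
    h, w = depth.shape
    # Regular grid of X/Y coordinates spanning [0, 1], one vertex per pixel,
    # matching the subdivided-plane topology used in Blender.
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w), indexing="ij")
    zs = depth * strength  # the Displace modifier's job, per vertex
    return np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

# Tiny 3x3 "depth map": a single bright center pixel pops out of the plane.
depth = np.zeros((3, 3))
depth[1, 1] = 1.0
verts = displace_grid(depth, strength=0.5)
```

The center vertex ends up at Z = 0.5 while the border vertices stay flat, which is exactly the relief effect the modifier produces at mesh scale.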
triangles, and we can decimate this geometry to optimize it, because we don't need that many polygons.

For the last part I'm going to show you how to generate additional maps for the shader. The process is very simple. You can generate them using Substance Designer or Substance Painter, but I'm going to use ShaderMap. Just drop your main texture into ShaderMap and choose your normal map; you can increase the intensity, and I think the inverted flip looks better. Then you can generate your specular color and specular map for the shiny surfaces. Save these maps. Now back in Blender, we apply the normal map and apply our specular map to the specular input. For now I will disable the base color, and here, as you can see, we've got the normals working correctly and the specular map as well.

So yeah, this is pretty much it. You can generate as many meshes as you want with this technique and arrange them into a composition. I think this is a pretty cool way to quickly kick off concepting or stuff like this. Thanks for watching, and goodbye.
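What ShaderMap does when it derives a normal map can be approximated in a few lines: take X/Y gradients of the height image, treat them as surface slopes, and pack the resulting unit normals into the usual 0-255 RGB encoding. A rough NumPy sketch (the `scale` parameter and the plain `np.gradient` filter are assumptions; real tools use tunable Sobel-style kernels and expose the "invert/flip" options mentioned above):

```python
import numpy as np

def height_to_normal_map(height: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Convert a 2D height map in [0, 1] to an 8-bit tangent-space normal map."""
    # Slopes of the surface; scale exaggerates or flattens the bumps.
    dy, dx = np.gradient(height * scale)
    # The normal is perpendicular to both slope directions: (-dx, -dy, 1), normalized.
    nx, ny, nz = -dx, -dy, np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap from [-1, 1] to the standard [0, 255] RGB encoding.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

# A flat height map yields the straight-up normal everywhere: (127, 127, 255).
flat = height_to_normal_map(np.zeros((4, 4)))
```

Flipping the sign of `dy` before encoding reproduces the "inverted" Y convention that some engines (and the video's ShaderMap setting) prefer.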
Info
Channel: DIGITAL GUTS
Views: 73,885
Keywords: stablediffusion, biopunk, ai, blender, 3d, unrealengine, metahuman
Id: tBLk4roDTCQ
Length: 6min 49sec (409 seconds)
Published: Mon Jul 03 2023