5 AI Tools for 3D You Need to Know About!

Video Statistics and Information

Captions
In today's video we're going to talk about some AI tools for 3D modeling that you might find interesting and disturbing at the same time. Without further ado, let's jump right in, starting with Luma AI, a great tool that allows you to 3D-capture any type of object in the real world and use it as a 3D mesh in any software of your choice. What makes this one special compared to classic photo-scan methods is that it uses AI to reconstruct reflections on object surfaces, so you not only get the right reflections but also a more accurate 3D mesh. To be more specific, Luma AI is an app and service developed by Luma Labs, and what I personally find interesting is that the technique they are using is called neural radiance fields, or NeRF for short, and from what I can see it could make traditional 3D scanning and photogrammetry obsolete soon. It is similar to the ray-tracing technique used in high-end gaming to create realistic graphics. NeRF has been used primarily in research facilities, but it is now being explored more widely due to the recent surge in AI image generation. The Luma AI app makes it easy for anyone to use NeRF, and the entire process can be managed from an iPhone; the app is compatible with iPhones as old as the iPhone 11 and will eventually be available on Android and on the web as well. To use the app, you circle around an object at three different heights while the app guides you through the process using an augmented-reality (AR) overlay. The app then processes the images on Luma Labs' servers and produces a 3D model of the object that can be viewed in different forms, including a generated video, an interactive version that can be rotated, and a presentation of the object that can be pivoted and zoomed in on. Luma Labs is constantly updating the app and service and has recently added features such as the ability to upload video for processing and the ability to add annotations to the 3D model. Recently they released a trailer of their text-to-3D-mesh AI, called Imagine 3D, which is in alpha right now, but you can check the results on their website, where they showcase the 3D models and the prompts entered to generate them, and it is just mind-blowing.

Next we're going to take a look at OpenAI's newest released model, another text-based 3D model generator, called Point-E. This one is open source, and the code is available on GitHub for anyone to use. Point-E is an artificial intelligence system that can generate 3D models based on text prompts. It works by first using OpenAI's text-to-image model DALL·E 2 to generate a synthetic rendered image from the text prompt, and then feeding that image to an image-to-3D model to generate a colored 3D point cloud. Point clouds are composed of discrete data points in space that represent the 3D shape, but they do not include detailed surface or texture information about the object. To address this limitation, the Point-E team also trained an additional AI system to convert the point clouds to meshes, which are commonly used in 3D modeling and design. The Point-E system is faster than previous state-of-the-art techniques, but it is not perfect and can sometimes produce shapes that do not match the text prompt, which is understandable for now because this is just the beginning. Potential applications for Point-E include 3D printing and use in game and animation development workflows. To be honest, this is exciting and scary at the same time. Additionally, the generated point cloud can serve as a starting point for artists to refine and improve upon, making their workflow more efficient. While Point-E's models are not currently as accurate as those from traditional techniques, they can be produced in a fraction of the time, potentially making 3D content accessible to more artists and designers working in fields such as film, TV, interior design, architecture, and more.

The next one is called PIFuHD, a machine learning model for generating 3D reconstructions of human bodies from a single image. It was developed by researchers at Reality Labs at Meta (Facebook) and the University of Southern California, and it was published in a paper in the proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. PIFuHD can be a useful tool for 3D artists in a number of ways. As a machine learning model, it uses a combination of neural networks and optimization techniques to reconstruct the 3D geometry and texture of a person's body from a single 2D image, which is really interesting, saving you the time and effort of manually creating 3D models from scratch, and this will hopefully get even better because Meta is actively working on their metaverse. The 3D models produced by PIFuHD are of relatively high quality, with detailed skin, hair, and clothing. This can make it easier for you to create high-quality 3D models of people quickly and efficiently, without having to spend as much time modeling and texturing individual details. Additionally, the models produced by PIFuHD can be used as a starting point for 3D sculpting, among other things; for example, you can use a PIFuHD model as a base mesh and then add further details using sculpting software such as ZBrush or Blender. Even though this is just in the early stages and it is not being used at a practical level yet, it can potentially speed up the overall modeling process and allow you to focus on the more creative aspects of your work. In the future, it also has the potential to greatly improve the efficiency and accuracy of 3D character modeling and animation, particularly in fields such as VR and AR, especially knowing that Meta is currently working on their metaverse and already has the Oculus VR headsets, which means they probably have big plans for this technology.

Now we're going to talk about another research paper, this one published by NVIDIA Research. They developed an AI model called GET3D that generates 3D shapes using only 2D images as input. The model can create a wide range of 3D objects, including buildings, vehicles, characters, and animals, and can generate up to 20 3D shapes per second when running on a single NVIDIA GPU, which is just impressive. The shapes are produced in the same formats used by popular 3D software, allowing you to import them into a 3D application or game engine for further editing and improvement. The model was trained on synthetic data consisting of 2D images of 3D shapes taken from different camera angles. Generally speaking, GET3D is intended to make it easier to populate virtual worlds with 3D objects, and it could be used in game development, robotics, architecture, and social media. From what I can see, all the big companies are now trying to be first when it comes to AI, whether that's art, 3D modeling, animation, or generating videos and text, and it feels like this is the 90s again, which is exciting. Anyway, the 3D meshes generated by GET3D are also fully textured, and using another AI tool from NVIDIA Research called StyleGAN-NADA, they were able to guide the style of the models (I mean the models' textures) using text prompts only. For example, a rendered car could be modified to become a burnt car or a taxi, or a regular house can be turned into a haunted one, as they showcase in their presentation video. Everything is available and well documented on their GitHub, and you can take a look at it if you are interested.

Finally, from Google this time, we have another text-to-3D artificial intelligence called DreamFusion, a tool that uses a combination of NeRF-like models, like the one used by Luma AI, and a function called score distillation sampling, or SDS for short, to generate 3D objects and scenes based on user-provided text prompts. SDS is a way of minimizing the difference between a family of Gaussian distributions and the score functions learned by a pre-trained model, allowing for the creation of differentiable image parameterizations through optimization. It's basically the process of turning text into a 2D image using generative models like Imagen or Stable Diffusion, and then converting the resulting image into a 3D mesh. This combination of techniques allows DreamFusion to generate 3D objects and scenes that are both high quality and coherent with the user's text prompt, which is potentially very helpful for 3D artists, as it allows you to quickly create 3D models based on your own ideas rather than building them manually by hand. There are some differences in the code available on GitHub: the folks at Google Research used the Imagen text-to-image model in the paper, but since Imagen is not available to the public, the open-source versions use Stable Diffusion instead, which may lead to slightly different results if you play around with a Colab notebook. If you are interested in these services and tools, you will find the necessary links in the description. I hope you found this video useful; if you did, please give it a thumbs up. You can also check out some of our previous videos. Thank you very much for watching, and I'll see you in the next one.
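To make the score distillation sampling idea from the video a bit more concrete, here is a minimal NumPy sketch of the SDS update, under heavy simplifying assumptions: the "renderer" is a toy linear map standing in for a NeRF, and the frozen diffusion model's noise predictor `eps_hat` is a hand-written stand-in (it assumes the model prefers one target image). The key point it illustrates is that the scene parameters are updated with the gradient `w(t) * (predicted_noise - sampled_noise) * d(image)/d(params)`, without backpropagating through the diffusion model itself. This is an illustration of the principle, not DreamFusion's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": 16 parameters render to a 64-pixel image via a linear map
# (a stand-in for a differentiable NeRF renderer).
A = np.asarray(rng.normal(size=(64, 16)))

def render(theta):
    return A @ theta                       # d(image)/d(theta) = A

# Stand-in for a frozen diffusion model's noise prediction eps_hat(x_t, t).
# We pretend the model has learned that "good" images look like x_target.
x_target = A @ rng.normal(size=(16,))

def eps_hat(x_t, alpha, sigma):
    # For a model concentrated on x_target: x_t = alpha*x + sigma*eps
    # implies the implied noise is (x_t - alpha*x_target) / sigma.
    return (x_t - alpha * x_target) / sigma

def sds_step(theta, lr=1e-2):
    x = render(theta)
    t = rng.uniform(0.02, 0.98)            # random diffusion time
    alpha, sigma = np.cos(t), np.sin(t)    # toy noise schedule, alpha^2+sigma^2=1
    eps = rng.normal(size=x.shape)
    x_t = alpha * x + sigma * eps          # forward-noised render
    w = sigma ** 2                         # time-dependent weighting w(t)
    # SDS gradient: w(t) * (eps_hat - eps) @ d(image)/d(theta),
    # treating eps_hat as a constant (no backprop through the model).
    grad = w * (eps_hat(x_t, alpha, sigma) - eps) @ A
    return theta - lr * grad

theta = rng.normal(size=(16,))
before = np.linalg.norm(render(theta) - x_target)
for _ in range(2000):
    theta = sds_step(theta)
after = np.linalg.norm(render(theta) - x_target)
print(before, after)  # the render drifts toward the model-preferred image
```

In a real text-to-3D pipeline the linear map is a NeRF rendered from a random camera, and `eps_hat` is a large pre-trained text-conditioned diffusion model such as Imagen or Stable Diffusion; the structure of the update is the same.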
Info
Channel: InspirationTuts
Views: 75,476
Keywords: Top AI tools for 3D Modeling, top ai tools for 3d modeling and rendering, top ai tools for 3d modeling 2022, top ai tools for 3d modeling, top ai tools for 3d modeling and animation, top ai tools for 3d modeling blender, ai apps, ai tools, ai for modeling, ai for 3d modeling, best ai tools for 3d modeling, blender, blender 3d
Id: z-alDjdVFxk
Length: 9min 35sec (575 seconds)
Published: Thu Feb 02 2023