Top Free AI Model Generators Tested for 3D Printing

Video Statistics and Information

Captions
AI is definitely the word of 2024, and it's finding its way into every nook and cranny of our lives, from hijacking the Pope's wardrobe and Drake's vocal cords to mastering protein folding and developing new cancer treatments. One area that's not yet gone fully mainstream is 3D model generation. However, whilst you can't yet just open up ChatGPT and request a 3D printable model of Will Smith eating spaghetti, there are a number of innovations already out there which create 3D printable designs using AI. I've been testing and experimenting with a number of these more interesting 3D AIs, and in this video I'm going to be going through what each of them does and how they perform when paired with 3D printing. So let's have a look.

There are several techniques, technologies and platforms that I'll be looking at here, but everything I cover in this video is currently available to use for free, either in its entirety or at least on a trial basis. Some use AI to generate 3D models from scratch, and some use AI to create 3D models of real-world objects without the need of a 3D scanner or photogrammetry.

The first one I'd like to talk about is a pairing that I've been experimenting with for a while. LLMs, or large language models, such as ChatGPT, Claude and Copilot, have been a key part of the growth of mainstream AI over the last year or two. Large language models are exactly that: they are focused on language, and most can't interpret other mediums such as images or audio on their own, though many LLMs do pair with a generative image AI model to provide those services behind the scenes, such as DALL-E 3 used by ChatGPT.

There's a piece of CAD software called OpenSCAD which has been around for years and is particularly popular for designing parametric models, where specific sizes and shapes can be easily adjusted to make the model fit a specific use case. With OpenSCAD, rather than designing your model by manipulating shapes in a graphical interface, you write a script to determine what shapes you want, what their sizes should be, where they should be placed and how they should interact, and then it will create what you've scripted. And so I wondered what would happen if I asked an LLM to write a script which would design a 3D model, and then ran that in OpenSCAD.

As LLMs tackle coding problems in slightly different ways, I thought I would test this with three different ones: ChatGPT by OpenAI, Claude by Anthropic, and Microsoft's Copilot, previously called Bing Chat. First up we've got ChatGPT, and as I'm using the free versions of all of these, this is ChatGPT 3.5. I want to request something fairly straightforward for now, with simple geometry, so let's ask ChatGPT to write me a script for use in OpenSCAD that designs a space rocket. As you can see, it's now writing the code out for me, and down at the bottom it's included a short description of what it's produced: this script creates a simple rocket with a cylindrical body and a conical nose cone, along with three fins evenly spaced around the body. Evidently it has a good understanding of the core components that make up a rocket ship shape, and it's nice to see that it's made the dimensions of each component variables, so without delving into the code you can just change the numbers at the top to change the size of each of these components. Now let's get this code copied and we'll see what OpenSCAD makes of it. Here in OpenSCAD we have our graphical view in the middle, and to the left we have our script box, where I can paste the code that ChatGPT generated for us.
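To give a sense of what that kind of generated code roughly looks like, here's a minimal hand-written sketch of a parametric OpenSCAD rocket in the same spirit: dimensions as variables at the top, a cylinder for the body, a tapered cylinder for the nose, and fins placed in a loop. This is purely an illustration of the structure being described, not the actual ChatGPT output, and all the dimensions are made up.

```
// Illustrative parametric rocket, not the ChatGPT-generated script.
// All dimensions are arbitrary and easy to tweak.
body_radius = 10;
body_height = 60;
nose_height = 20;
fin_count   = 3;
fin_size    = [2, 15, 20];   // thickness, depth, height

$fn = 64;                    // smoothness of curved surfaces

// Fuselage: a plain cylinder
cylinder(h = body_height, r = body_radius);

// Nose cone: OpenSCAD has no cone() primitive, so you use a cylinder
// whose top radius tapers to 0 (the same issue the LLM-generated
// scripts run into below).
translate([0, 0, body_height])
    cylinder(h = nose_height, r1 = body_radius, r2 = 0);

// Fins: simple cubes rotated evenly around the body
for (i = [0 : fin_count - 1])
    rotate([0, 0, i * 360 / fin_count])
        translate([body_radius, -fin_size[0] / 2, 0])
            cube([fin_size[1], fin_size[0], fin_size[2]]);
```

The cone-via-cylinder trick is worth flagging now, because it's the warning the LLM-generated scripts trip over in the tests that follow.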
With the code pasted in, I can then hit this button and it will run the script for me. Well, we certainly have our fuselage, and what looks to be three fins featuring more detail than I was expecting, but just floating in mid-air, and that's it. Down here in the console you can see that a warning has been highlighted. This is because it's requested a cone in the script, but to create a cone in OpenSCAD you need to request a cylinder and then taper its top radius down to form the cone. So let's get that changed and run the code again. Well, the error disappeared, so the code should have all run, but we still haven't got a nose cone. It looks like the cone has been subtracted from the bottom of the fuselage to create an exhaust, rather than becoming a nose cone.

Well, that didn't go particularly well, so let's now see how Microsoft's Copilot does with the same request. It's done similarly to ChatGPT, generating the code and following up with a simple description. Let's copy the code and see how this fares. Well, that's definitely worse than ChatGPT, and it seems to have only produced a column. It's obviously prompting other objects in the code, so my assumption, without analysing the code, is that it's just positioning all the other shapes inside each other.

Okay, well, third time's a charm, so let's see how Claude does at making a space rocket. The first thing I'd note here is that it's written far less code than either of the other two, whilst the description text is probably twice as long. Let's see if this generates anything for us. Get this copied in here, and... whoa. Well, that's still unusable, but it's considerably better than either of the others: we have four fins and a launchpad, all positioned in the correct places. Ah, and you can see that Claude has done the same thing as ChatGPT and requested a cone rather than a cylinder. Let's get that changed and see if it does anything. Not only did it generate a nose cone, but it actually placed it in the correct place.

Well, none of those went well, although Claude did evidently have a better understanding of where the shapes should be positioned, at least. Let's give these LLMs one more chance, using something they should have more training data on and which should hopefully use simpler shapes. We'll start with ChatGPT again, and this time we're going to ask it to write me a script for use in OpenSCAD that designs a chair. We'll grab this code and paste it into OpenSCAD. Well, I guess technically all the components are there, but again the positioning is all off. All right, let's try Copilot. We'll paste it in here, click run, and... oh. Oh, okay, that's not gone great. I thought it had nailed it for a second, but obviously that is very much not a chair. Okay, well, last chance for this LLM and OpenSCAD pairing: let's see how the winner of the rocket round, Claude, deals with a chair. Ooh, yeah. I feel it still did better than the other two, but not by much, though I guess if you turned that upside down you'd basically have a stool that I'd happily sit on.

Well, even with Claude's better sense of spatial awareness, and even if you had a good understanding of how to script in OpenSCAD so you could make tweaks yourself, this is clearly not the cutting-edge method of producing brilliant designs for 3D printing that we're after. LLMs and OpenSCAD are both fantastic in their own right, but for now at least, this pairing is not a 3D revolution. Luckily, there have been much more exciting innovations in this area, and the next tool that we're looking at is much closer to what we're after.
Meshy.ai has a range of tools, but the specific one we're looking at right now is Text to 3D. This allows you to enter a simple text prompt, much like you'd use in an AI image generator such as Midjourney, DALL-E 3 or Ideogram, and it will then generate a 3D model based on your request. Meshy currently offers you 200 free credits, which allows you to generate and enhance roughly eight designs, and then if you want to continue using it you can purchase additional credits.

Once logged in, we want to go to Text to 3D on the left-hand side. Currently you can select whether you're using the alpha or beta model, and for now I'm running it in beta. Click Text to 3D and we're brought to the generation page. In the prompt box at the top left you can simply type out what you'd like Meshy to generate, so let's request a rocket again and we'll see how it compares to one from an LLM and OpenSCAD. Once you're happy with your prompt, you can also enter a negative prompt if there are things that you explicitly want it to avoid including. I won't bother with this here, but it's a useful feature to have. Below that you can choose an art style: you can set whether you'd like the design it generates to be realistic, cartoon-esque or low-poly, or leave it to automatically decide which style is most appropriate for the prompt.

The last option you have is regarding the seed. A seed is basically just a big number: because true randomisation in software isn't possible, a seed number is used and all calculations are worked around that number to create a unique outcome. By default you'll always get a randomly generated seed here, but you can opt to use a fixed seed if you wanted to run the same prompt again, making slight changes but getting a similar outcome. For our purposes here we don't need to do this, so I'll leave this unticked.
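You can see the same seed behaviour in OpenSCAD itself, which makes for a handy illustration of the concept. This is just a general demonstration of seeded pseudo-randomness, not anything to do with how Meshy works internally: rands() with a fixed seed returns the same "random" numbers on every render, while leaving the seed out gives different values each time.

```
// Seeded pseudo-randomness in OpenSCAD: purely to illustrate the
// idea of a seed, not Meshy's internals.

// Fixed seed: this vector of four values between 0 and 10 is
// identical on every render.
fixed = rands(0, 10, 4, 42);
echo("fixed seed 42:", fixed);

// No seed given: a new set of values each time you preview.
random = rands(0, 10, 4);
echo("no seed:", random);

// A tiny visual example: tower heights driven by the seeded values,
// so the "random" skyline is reproducible run after run.
for (i = [0 : 3])
    translate([i * 12, 0, 0])
        cube([10, 10, 1 + fixed[i]]);
```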
With that set, hit Generate down at the bottom. Generating from a prompt costs five credits, and the time it takes tends to vary depending on how busy their servers are at the time: I've had it take around one minute before, and other times around 20 minutes, so it really can vary. Once it's finished processing, it will give you four different draft designs, and you can select each of these drafts to look over it in more detail. Once you've found the draft you like, you can hit the cog symbol next to it and select the texture richness. As we are going to be 3D printing the model, we don't need the texture, so we can just select None and then hit Refine. This will now refine the draft you selected into a better-designed final model. This process can take a bit longer than it took to create the original draft, but in my experience it usually finishes within 5 to 15 minutes. Once the process is finished, the refined version will appear in the Refined section. If you hover over it and click Preview, it'll open up in the 3D view, and you can now see it's a much smoother design than the draft. You can then click the arrow button, select your desired format (most people here will likely want STL), and download it. Clearly, Meshy has produced a much better rocket than any of the LLMs.

So let's request something a little more unique, and even functional, and we'll throw in something that we all know AI can find a little challenging: let's prompt a mobile phone stand shaped like a human hand. We'll switch the art style to realistic and hit Generate, and it's given me four new drafts to look over. This one looks quite good; I was thinking with my prompt that the whole thing would be a hand holding the phone, but this also works, and you can see the two prongs at the front for propping up the base of the phone. This one seems to have ignored the human hand element entirely, and is more of a standard phone stand. This one, more than any of them, has fallen into our trap of human hands and AI's inability to fully understand them. And then finally there's this one, which is similar to the first and arguably has a nicer hand design, but is missing one of the prongs at the front to hold the phone up. Of the four, I think I like this one the best. Yes, it's only got four fingers, but hey, if I 3D print this it'll technically be more real than The Simpsons, and they've only got four fingers, and it also has both of the prongs at the front. So let's get this one refined. Obviously I sped that up, but for reference that refinement took pretty much six minutes to the second. Now let's have a look at it. I quite like this design; the base is a bit funky and we may need to do something about that, but this looks like it'll be a good functional phone stand, so let's get the STL downloaded.

As I import it, you can see it's been scaled particularly small, which makes sense as I never gave Meshy a reference for scale, so let's get this scaled up. My phone is around 80 mm wide, so I want these prongs at the front to be a little narrower than that to ensure that my phone fits. Let's select the scale tool and, keeping it set to uniform scale, change the Y dimension to 65 mm. Well, that looks great. Now, to fix this bumpy, uneven base, we'll just use the cut tool. I'll drag the cut plane down to a height that looks good to be the base of the stand, and then you can see here at the back there's a bit of a dip, so I'll drop the plane so it doesn't cut through that. I'll untick object B, as we don't want to keep the lower half after this cut, and then click Perform Cut. I'm happy with this, so let's hit Print and see how it comes out.
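Those two fixes, scaling to a known dimension and lopping off the uneven base, can also be done outside the slicer. Here's a rough OpenSCAD equivalent purely as an illustration of the same idea; the file name and the numbers are placeholders, and the scale factor assumes you've measured the raw model's width first.

```
// Illustrative only: scale a downloaded mesh to a target width and
// slice off its uneven base. File name and dimensions are placeholders.

target_width   = 65;    // desired width across the prongs, in mm
measured_width = 2.4;   // width of the raw imported mesh (measure it first)
cut_height     = 3;     // everything below this height gets removed

s = target_width / measured_width;

difference() {
    // Uniform scale so the prong span comes out at target_width
    scale([s, s, s])
        import("hand_phone_stand.stl");

    // Subtract a large cube sitting below the cut plane to give the
    // stand a flat base, like the slicer's cut tool.
    translate([-500, -500, cut_height - 1000])
        cube([1000, 1000, 1000]);
}
```

In practice doing it in the slicer, as in the video, is quicker; a script version is only handy if you want the modification baked into a reusable file.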
Well, I'm pretty happy with how they came out, and it's actually blowing my mind a bit. This entire thing was designed by AI, which is amazing, but what's more is that it actually works. Not only does it fit my phone perfectly, which I know is helped by my scaling it in the slicer, but the design even holds it at a good, suitable angle for a phone stand. The bottom prongs ensure the phone isn't going to slip or fall but aren't covering up the screen, and the whole stand is solid and stable, and isn't just going to tip over under the weight of the phone. Meshy is currently primarily focused on producing designs for use as digital assets, with a big focus on textures and less so on real-world applications. However, they're definitely aware of its potential for use in 3D printing, and I have no doubt that we'll see this area of their service develop very quickly.

Another text-to-3D AI platform is Tripo, available at tripo3d.ai. It offers a free plan of 600 credits a month, and then you can either top up the credits pay-as-you-go at 1 cent each, or subscribe to a premium plan for a bigger monthly allowance. With Tripo there are fewer features to choose from, and it pretty much just gives you a prompt box. There is an image upload feature here, which I won't go into today with Tripo, but we'll be looking at a similar feature on another platform later on in this video. For now we can just start typing. As I type, you can see the objects on display update to what I've typed in real time. These aren't being generated live based on my prompt, but are objects that other users have generated previously with similar words in their prompts. Once you hit Create, it will then generate a new model based on your prompt. As with Meshy, Tripo gives you four designs to choose from, and you can then hit Refine to create a better-quality one. Let's skip past this progress bar. Now, this rocket looks really impressive. Once you're happy with it, you can select STL as the format down at the bottom and then hit Download; this will then generate the STL mesh, which takes 30 seconds to a few minutes.

That rocket is really impressive, and I'm particularly impressed with the level of detail that Tripo included in the mesh of the smoke around the base. However, I then decided to try to get it to generate a phone stand similar to what I made with Meshy, and it just kept insisting on including a phone in the model, no matter what I used in the prompt. I then tried prompting for a bowl shaped like a human hand, and it just gave me bowls with hands sticking out of the middle. So Tripo seems to struggle to understand when two different objects are mentioned in the same prompt. However, looking at the models on their dashboard, Tripo seems to be very focused on character models and doesn't seem to have an issue generating known figures, so I thought I'd see how it would cope with the request "Iron Man in the lotus position". It actually came out pretty well, with the exception of having an extra foot coming out of his backside and another one in the middle of his ankles. However, when I exported the STL and imported it into a slicer, it becomes evident how much of this detail was completely reliant on the texture rather than being physically modelled in the mesh. Tripo is obviously a really powerful tool and definitely worth going and playing around with. However, even though the rocket it produced was really impressive, it doesn't seem to have the best grasp of human language, which can affect how it interprets your prompts; this is why I really struggled to get it to produce a phone stand that didn't include a phone in the model.

Luma Labs, also known as Luma AI, offers a similar text-to-3D service to Tripo, with similar results in terms of quality and prompt accuracy, though it does seem to be far quicker at generating and upscaling designs. However, it also has another impressive tool in its arsenal: not only can it generate 3D models from scratch using text to 3D, it can also convert a video into a 3D model using a NeRF, or neural radiance field. It's sort of similar to photogrammetry, where you take photos of an object from every angle and run them through an algorithm which finds matching points across the different photos and creates a point cloud. With a NeRF, you record a video of your object instead: you want to record three to four loops around it, adjusting the height and angle of the camera on each rotation, to make sure you've looked at every side of your object from every angle.
With Luma Labs, you can either download their app, available on Android and iOS, or upload your video directly to their website at lumalabs.ai. Here you can see Luma Labs is training its neural network on the contents of the video so it can then reconstruct those contents in 3D. It can take up to half an hour to process, and once it has, you can view it directly in the app or on the website. When you load it, it does this cool transition, moving from a point cloud, to texturing the object, and then finally texturing the environment. You can then move around and view the model from every angle, zooming in and out and clicking on different points in 3D space to change the point that your camera rotates around. If you click up here, it will give you a range of different download options: you can download it as an object, for which you can choose from three different file formats; you can download it as a scene, either as a point cloud or a full mesh; or you can download it as a 360° image. Those last two are more for if you've used it to scan a room or environment rather than an object, and for 3D printing you're most likely going to want to download it as an OBJ.

So, with that downloaded, let's take a look at it in a slicer. Well, obviously it's still including the pole I had it mounted on, but that's easy enough to remove right inside the slicer. We'll use the cut tool to cut most of it off, but there's still a bit sticking out of the bottom of the neck, looking a bit like the stub of a spine. To get rid of that, we can just add a part: we'll add a cube, and then move, scale and rotate it until it's covering the part we want to remove. Then we can use the mesh Boolean tool in difference mode, select the head and the cube, confirm, and it's removed the extension at the bottom of the head. If you'd like to learn more about the mesh Boolean tool, I've already released a tutorial on it on my channel. Now, obviously there's very little flat surface at the bottom of this model and a ton of overhangs, but if we just slap some tree support material on, that should be enough to handle it pretty well. After a quick session with the paint tool, let's get this sent to print.

Well, that's pretty incredible. It's obviously not perfect, and it's far from being millimetre accurate, but it seems a remarkable option if you just want to quickly make a fairly good aesthetic copy of something without being a 3D designer or having a 3D scanner. Now, the concept of uploading a video and converting it into a 3D model may sound quite similar to the Bambu Lab AI scanner, which I covered in a video a few weeks ago, and the performance of that was, to put it mildly, appalling. But I thought I'd give it another try, so I recorded another video of this same subject, with the same lighting and in the same conditions as when I recorded the video for this one, so let's see how it dealt with that. Once recorded, I uploaded the video, and after a while it highlighted the subject it thinks I want to scan. That looks good to me, so let's continue. This process took a good hour or so, but let's see how it came out. Well, the preview isn't filling me with confidence, but let's get it downloaded and have a proper look. Wow. It's quite remarkable that both this and the Luma Labs one were produced from effectively the same video. This is obviously completely unusable, but it is, to be fair, listed as an experimental feature, and I have no doubt it will be improved soon.

Luma Labs' video-to-3D is fantastic if you've got access to the thing that you want a model of, but what if you've just seen something online, or you've grabbed a photo on your phone, but you're not able to go walking around it taking a video?
InstantMesh, hosted on the AI super-hub Hugging Face, allows you to upload a single 2D image and convert it into a 3D model. Now, you could go completely AI-generated and use something like Midjourney to create the initial image for this, but for this example I'm going to use an actual photo. InstantMesh is currently completely free to use, and I'll make sure to include a link to all of these tools in the video description.

Let's have a go at running a photo of the recently retired Atlas HD humanoid robot from Boston Dynamics. If the image is like this and has a detailed background, you can tick Remove Background, and when you click Generate it will try to determine what the subject of the image is and remove the background. You can see here it's done an okay job, but it's failed to remove some background areas, which could affect how the model ends up coming out. Once it's isolated the subject, it immediately begins processing it and creates a number of images of that object showing how it thinks it looks from different angles. It then uses these multi-view images to generate a 3D model, which you can look at here. If you like the model, you can download it either as an OBJ or as a GLB. As with the designs we looked at earlier, there's no reference for scale in the generated object, so we need to scale this to what we want, and we'll get it standing upright. Well, the general shape is there, but there are a lot of issues with this design, not helped by the fact it didn't properly remove the background.

Let's try another photo of Atlas HD, this time one that is already on a perfectly white background. As there is no background to remove, we can untick Remove Background and just hit Generate. Okay, the multi-views are looking pretty good, so fingers crossed for the model. Oh. The model looks a bit cleaner, but the wires and cables are broken and look messy. Let's download it and see how it looks. The general shape of this does look better than when we had the background auto-removed, but there's some funkiness going on with one of the arms, and the cables are all broken.

Let's try one last image. We'll use Boston Dynamics' other lovable dancing robot, Spot. There is a background to this one, but it's very simple, and there are also no wires or cables to worry about, so let's see how it does. Well, it's removed the background without any issue, so that's good, and the multi-view images look impressively accurate. Okay, the texture looks pretty messy, but the mesh, which is what we care about for 3D printing, looks fairly good. Let's get this imported into a slicer. There are obviously some inaccuracies, such as the middle hinge of the arm, and it's all a bit wobbly with no straight lines, but that can kind of be expected when it's coming from a photo. It's impressive that it does have all four feet starting at the exact same layer, so it's not made one leg slightly longer than the others. However, for printing it would be nice to have a slightly larger footprint, so I may cut the very bottom of these feet off. Let's give it a quick paint job and see how it looks.

I actually really love this little guy. No, he's not perfect; no, he's not going to win any design awards; and yes, there are lots of problem areas and inaccuracies; but it's more than good enough for a kid's toy, or a model to represent the robot sat on a windowsill or a desk.
What really blows my mind is that there was absolutely no design work at all on my part in producing and printing this thing which I now hold in my hands. Yes, the horse's head had a lot more detail and a lot more accuracy than this, but that makes sense because of the amount of information it had to go on; this was literally produced using a single 2D image, and that phone stand earlier was produced with nothing more than a text prompt.

These are just a handful of the AI tools currently available to use for free which can be used to generate models for 3D printing. Whilst some of them do allow you to download the models as STLs and OBJs (with the exception of the Bambu Lab AI scanner, which currently doesn't work), none of them are designed specifically with 3D printing in mind, taking into account things like minimising overhangs. As a result, even when you do find a model that you like, you may need to make some simple tweaks to it in the slicer, like I did with these. As these tools develop, it would be great to be able to add things to the prompt such as tolerances and sizes for dimensional accuracy, as well as the AI having a better understanding of rigid design for smoother, less bumpy surfaces. Even in their current state, not only are these tools fun to play with, but in the right conditions they can be genuinely useful.

As I mentioned earlier, I'll pop a link to all the tools that I've covered here in the video description. If there's a tool that I've covered here that you'd like a more in-depth tutorial on, or if there's an AI tool that you think is great for 3D printing which I've not covered, make sure you let me know in the comments below and I can cover it in a future video. If you found this video useful and you've learned something new, please make sure you hit the like button, and while you're down there, hit subscribe to make sure you get more 3D Rev goodness in your feed in the future. As always, thanks very much everyone, and until next time, happy printing.

I'm genuinely really impressed with the quality and performance of some of these tools, and I'll continue to cover more AI 3D printing tools in the future. I've also got a mini-series on different methods of 3D scanning coming very soon. Thanks very much for sticking around. If you'd like to support me and my channel and get yourself a load of bonus 3D Revolution goodies, you can now become a channel member by hitting the join button below my video and becoming a 3D Revolutionary, like these fantastic people below. If not, why not check out one of my other videos, learn something new, or just have some fun? Either way, thanks very much everyone, and until next time, happy printing.
Info
Channel: 3D Revolution
Views: 87,040
Keywords: 3D printing, 3d printers, AI, artificial inteligence, cad, design, 3d model, 3d, LLM, chatgpt, gpt, claude, copilot, openscad, ai design, generative, generative 3D, 3D AI, mesy, tripo, tripo3d, lumalabs, lumaai, bambu, bambu studio, prusaslicer, makerlabs, ai scanner, instant mesh, instantmesh, huggingface, boston dynamics, atlas, atlashd, spot, horse, gas mask, ai hand
Id: GVd5vcMDfVI
Length: 26min 42sec (1602 seconds)
Published: Fri May 03 2024