Midjourney Prompt Tips: Codes & Tricks - WHAT STILL WORKS? (Perfect for AI Art Beginners!)

Video Statistics and Information

Captions
Since Midjourney came out, we've all been using a bunch of these different codes to take more control over our art. The problem is they don't all work anymore: version 4 has come out, and certain algorithms support certain codes while others don't. So I'm going to go over which codes currently work and which ones don't, so you know exactly what's available to you when generating art with Midjourney.

Now, in order to see what algorithms are available, so you know what I'm referring to, we're going to go to /settings in Discord and take a look. You can see here we've got seven algorithms: versions 1 to 4, each one getting more advanced as they go, and then Niji mode, which is more of an anime-style mode that lets you produce more illustrative, stylistic images closer to an anime or Japanese animated style. Then you've got MJ Test and MJ Test Photo, two test algorithms released between version 3 and version 4 which are also very high quality. Version 4 is the highest quality at the moment; however, the test algorithms produce higher-resolution images. What we're going to do now is touch on which codes work for every single algorithm, so you know which ones are safe to use straight off the bat.

The way codes work is you type in your prompt, such as "a dog on a skateboard", and then you add the code to the end. The first one we're going to look at is aspect ratio, --ar. If I type in 1:1, I get a one-to-one image; if I type in 3:2, I get an image that's three units wide by two units tall. This works on every algorithm, but there are different limitations on which aspect ratios you can actually use. If I come back to this image here in the Midjourney documentation, you can see the different algorithms and the shapes they produce, and of course you can switch the numbers around: instead of 4:7 producing a tall image, you can use 7:4 and produce a wider image. But the available aspect ratios differ per algorithm: for version 4 and the Niji model we can go up to 2:1 or 1:2, depending on which orientation you want; with versions 1, 2 and 3 we can go to 5:2 or 2:5 at most; and with the test algorithms, test and test P, we can do 3:2 or 2:3.

The next code that works with absolutely everything is chaos. If I type /imagine and start my prompt, this time it might be "a cat with sunglasses", I add --chaos to the end. The way it works is: if I give it a chaos of 0, you'll notice that all the images look pretty similar, because what chaos does is create variance between the four images in the grid. With 0 there is no chaos applied, so they all have the same look. But if I paste the exact same prompt in and give it a chaos of 100, you'll notice the images are a lot more different from each other. So if you're looking for a lot more variety in your image grid, try bumping that chaos up to 100, or somewhere between 0 and 100, and see what you can get.

The next command that works on everything is --no, which is negative words. Once again I'm going to type in the same prompt, "cat with sunglasses", and add --no sun at the end. You'll notice there is no instance of the sun in these images; they actually have a bit more of a starry look to them.
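To make the syntax concrete, here's roughly how those first three parameters look when appended to a prompt in Discord. The prompts and values are just illustrative placeholders, not the exact generations from the video:

```
/imagine prompt: a dog on a skateboard --ar 3:2
/imagine prompt: a cat with sunglasses --chaos 100
/imagine prompt: a cat with sunglasses --no sun
```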
If I don't want the sun included, I simply put --no followed by whatever words I want de-emphasized. While it's not a perfect command, it's great for taking more control over your prompts.

Stop is the next code that works with absolutely everything. If I type in an image of, say, a carnival, and add --stop 10, it stops rendering at 10 percent of the image, which is actually very interesting. If I do the exact same prompt again with 100, it produces 100 percent of the render. You can also go somewhere in the middle: type --stop 60 and it stops at 60 percent, which is a little ugly, but it's still a cool tool to play with.

The next one that works with all algorithms is seed. Let's say I wanted to reproduce this image (it's only 60 percent rendered, if you were following the last tip). If I click on reactions and add the envelope (if you're not sure where it is, just search for "envelope"), reacting with the envelope brings up some information about my prompt, including the images, and it also brings up this seed here. If I copy this seed, I can type in my prompt, /imagine a carnival (it doesn't have to be the same prompt, but in this case we'll use the same one), add --seed, paste that number in and hit enter. So this was our previous image with the seed that we copied, and this is the new one with the exact same seed. You can see it's extremely similar, because what seed does is reproduce the same starting noise or pattern, whatever it is that Midjourney begins with. Since it starts from that exact same information, you tend to get the same images, so by using seed you can adjust your results more closely from the same starting point when you re-enter that prompt.

Now we're going to explore the codes that don't work with every algorithm. I'm going to start with sameseed, which works on versions 1, 2 and 3 as well as test and test P; however, I haven't found the results with test and test P to be as consistent as with versions 1, 2 and 3. If I type /imagine, put my prompt in, and instead of seed I type --sameseed with a number (this number is up to 10 digits), I need to make sure I don't use version 4 or Niji or it won't work, so I add --v 3 to use version 3. You'll notice that each image looks pretty much exactly the same, because sameseed applies the same seed number to every image in the grid. If I take the exact same prompt, paste it in, and just use --seed instead of --sameseed, it now has a bit more variation, as opposed to before, when sameseed made every image look basically the same with only a few minor differences.

Next is --style. With version 4 you can now type --style 4a, 4b or 4c; 4c is now the default. What happens if we type 4a or 4b? Let's start with 4a, also try 4b, and of course 4c. This is 4c, which in this instance produces images that are a little more photorealistic and a little more dramatic. 4b looks kind of similar, but it is still a different algorithm, so it'd be good to test it further. Switch to 4a, and what we get is more colorful and sort of illustrative. So that is --style 4a, 4b or 4c.
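For reference, the parameters from this part look roughly like this when typed out. The prompts are illustrative and the seed value is a placeholder, not a real seed from the video:

```
/imagine prompt: a carnival --stop 60
/imagine prompt: a carnival --seed 1234567890
/imagine prompt: a carnival --sameseed 1234567890 --v 3
/imagine prompt: a cat with sunglasses --style 4a
```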
The next code is image weight, or --iw, which only works with versions 1 to 3 or test and test P. The way it works is: if I take this image and copy its URL, I can go /imagine, paste the URL in, and type a prompt; maybe this time I say "cat clown with red eyes" or something different, and I add --iw for image weight. I can give the image a weight of 1, which is the default, or up to 5, and what that does is weight the priority of the image URL. If I give it 5, the image is vastly more important when Midjourney processes the prompt; it will rely on it more to produce the result. If I make the image weight 1, it will rely on it a lot less. This doesn't work with version 4 or with Niji, so I'm going to add --v 3 to see what results we get. If you look at the original image and the image produced, it has kept a similar layout, with the face in the center taking up most of the frame; it has relied really heavily on the image for information about what to generate. This time if I go to /imagine, type the same thing but change the image weight to 1, still making sure I'm using version 3, version 1 or test, not version 4, not the default algorithm, you see it has relied a little less on the image. The image is still part of the generation, obviously it will use an included image to some degree, but backing that image weight down gives your prompt a bit more command over the final result.

The next code is --video, which works on versions 1, 2 and 3 as well as the test algorithms; it does not work with the current version 4 default algorithm. We simply type /imagine with our prompt, say "fast car", add --video, and because I can't use the default algorithm I'll go version 3 and hit enter. Once our images have been generated, we add the envelope reaction again (you can search for the envelope up here); it produces another message, and I can see here an address to an MP4 file you can download, and also a version of the video right here that you can watch. One thing to keep in mind: if I go back up to the top and upscale one of these images, then react with the envelope again, it now returns just a single image. This feature can be a little buggy, and I've found that sometimes it simply doesn't work; just try again later if so. Otherwise, when it does work, it's a cool way to produce nifty little videos like this.

The next command we're going to look at is quality, which works with versions 1, 2, 3 and 4 and the Niji algorithm. If I type /imagine "human face", I can add --q for quality. 1 is the default, but if I put in 0.25 we get an image which renders in a quarter of the time, since 0.25 is a quarter of 1. We can also boost it up to 2: if I take the exact same prompt and pop in 2, it spends more time on the initial generation (not necessarily the upscale), perfecting some of the detail and getting something better. The highest number you can use with quality is 5, and it'll take five times as long to render, but it gives you more detail and quality. It works pretty well and again produces more detailed results.
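Roughly how these three parameters are typed; the image URL is a placeholder standing in for whichever image you've uploaded or copied a link from, and the prompts are just examples:

```
/imagine prompt: https://example.com/cat-clown.png cat clown with red eyes --iw 5 --v 3
/imagine prompt: fast car --video --v 3
/imagine prompt: human face --q 2
```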
The next code that doesn't work on every algorithm is stylize; it works on every algorithm except Niji, but with different ranges. If I type /imagine, say "a funny dog on a unicycle", add --stylize and give it 0, we get a funny dog with stylize set to 0: it has taken the prompt very directly and tried to do it exactly as I described. But what happens if I take that exact same prompt and set the stylize to 1000? Midjourney has stylized the image more in a way that it prefers, so instead of relying too directly on my prompt, stylize has allowed it a little more freedom in its interpretation. Although the result is still much the same, it's a great way, if things are looking a little too uniform or clean-cut, to spread them out a little. To give you a quick rundown, stylize has different ranges: version 4 goes from 0 to 1000, versions 1 to 3 go from 625 to 60,000, and test and test P go from 1250 to 5000. You can also see the defaults here: 100 for version 4 (so anything below or above 100 will change the default result), 2500 for the older versions and 2500 for test and test P, so you can see what the default settings are and how far you can vary away from that default stylize rate.

The next code is --creative, which you can only use with the test and test P models. The way that works is: say I have this fast car here, generated with --test; you can see how it looks, pretty straightforward, right? But if I type /imagine with the same prompt and add --creative, I give Midjourney more creativity, again within the test algorithm. Having granted it more creativity, you can see, especially in the right-hand image, the extra color and vibrancy it has added and how it has stylized things a little more. That's a pretty handy tool when using the test or test P algorithms.

Next is --tile. If I type in my prompt and add --tile, this doesn't work with the current default version 4 or Niji, but I have had it work with version 3 and the test algorithm, so I'll add --test and show you what results we get. At first glance this doesn't look like anything special, but what it does is create an image which can be tiled repetitively and will seamlessly blend together. If I upscale this first image, I can right-click, copy the link address, and jump onto a website where we can test out the tile. Hit enter and you can see how it scales and repeats; I can change the width here to enlarge or decrease it, but ultimately it repeats seamlessly, which is a very handy tool, especially if you're a graphic designer or someone creating patterns to use on the back of something. The tile feature is incredibly powerful, and you can even use it to create faces and things like that; I've done a video on that on my channel, so check it out if you're interested.
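Typed out, the parameters from this part look roughly like the following; the tile prompt is my own illustrative example rather than the one used in the video:

```
/imagine prompt: a funny dog on a unicycle --stylize 1000
/imagine prompt: fast car --test --creative
/imagine prompt: watercolor flowers --tile --test
```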
The next thing we're going to look at is upscaling. In every algorithm you can upscale; otherwise you've got small images. The initial image you get is 512 by 512 pixels, but you'll get different results depending on the algorithm and what options you have. With this grid here, straight away we've got our standard upscale buttons, but we can change the default upscaler by going to /settings, or, once you've upscaled one of the images, you can choose a Light Upscale Redo or a Beta Upscale Redo. Like I said, in /settings we can set the regular upscale, the light upscale or the beta upscale as the default, depending on what we choose. But it doesn't stop there: if I take this version 3 image which I upscaled earlier, which was number three, I can also Upscale to Max, and I can Remaster, which simply re-renders the same image with one of the test algorithms.

We have so many different upscale options, and they all differ depending on the version we're using, so let me head over to the Midjourney documentation to show you what I mean. If I'm using version 4, the current default algorithm, you'll notice the starting grid size is 512 by 512, and whenever we upscale we get 1024 by 1024, whether it's a regular upscale or a light upscale. If we do the beta upscale we get 2048 by 2048; however, this switches to a different sort of algorithm, and I've found the results to be not quite as good. The light upscale produces an upscale with less detail, whereas the default upscaler simply adds as much detail as it can; the beta upscaler increases the resolution to 2048 by 2048 but doesn't actually bring much more detail with it. There's also a Niji upscaler, so if you're using the Niji model, which is down here, that's 1024 by 1024 as well. You'll notice all of the images you upscale with version 4, Niji, and even versions 1 to 3 are 1024 by 1024 by default, until you use Upscale to Max on versions 1 to 3: the max upscaler is actually 1664 by 1664. Keep in mind that changing the aspect ratio changes these sizes as well, but this little table will give you an idea. Now, if you want a much higher resolution, the test algorithms automatically produce 2048 by 2048 images; however, I recommend just using your standard upscale and, if you have the money, investing in the Topaz Gigapixel upscaler, because you can produce images up to six times bigger than normal and it uses AI to add detail as it enlarges. You can see the results here are pretty insane, and I'll pop a link in the description to a video so you can check that out.

Coming back to Midjourney, we've got a few different commands you can pop in there. The first is /settings: typing /settings lets us change default settings on Midjourney so we don't have to keep typing in whatever it is we need. We can also move on to prefer suffix: I type /prefer suffix, and it asks me to put in a new value. What this means is, if I enter --ar 2:1, it will now automatically add that to my prompts. So if I type /imagine "a hairy dog" and hit enter, you'll see it has automatically added --ar 2:1. If I want to remove that suffix, I type /prefer suffix, hit enter, and enter again with nothing there, and the suffix is removed.
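A rough sketch of the /prefer suffix flow described above; in Discord the value is entered in a separate field rather than on the same line, and the prompt is just an example:

```
/prefer suffix --ar 2:1
/imagine prompt: a hairy dog        (--ar 2:1 is now appended automatically)
/prefer suffix                      (submitting an empty value clears the suffix)
```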
I can also set up custom codes by typing /prefer option set. I give the option set a name, maybe I'll call it "art2", then I click just outside that box, into the value field, and I can give it some custom commands such as --ar; I can give it a whole bunch of values. Because I've named it "art2", I hit enter, and now when I create my next prompt I can type --art2, and you'll notice it adds my options to the end of the prompt, which is pretty cool. If I want to remove it, I go /prefer option set art2, hit enter with no value, and you'll see the custom option is removed. You can also see which prefer options are available: go /prefer option list, hit enter, and it'll show that I have "wallpaper" and "YT" as options I can use.

The next feature is multi-prompts, which work with versions 1, 2, 3 and 4 and Niji, but not the test modes. I've done a video on this recently, but I'll type /imagine hot dog. You see here my prompt is "hot dog" and I've gotten pictures mostly of hot dogs. But what if I wanted a dog that was simply hot? I type /imagine hot:: dog, and that separates the two words. You can see we have "hot" with two colons and then "dog", and the prompt has been interpreted completely differently, because it sees "hot" and "dog" as two different concepts instead of combining them into the single phrase "hot dog". That's a very cool feature for taking more control over your prompts. You can also add numbers, which becomes what is known as prompt weight or word weight. I can go /imagine and type hot::5 dog::1. What this has done is made the word "hot" five times more important than the word "dog", because by using numbers from 1 to 5, or even 0, I can emphasize some words more than others. Let's see what it does: you can see it has emphasized the word "hot" with heat (maybe a little controversial, but you get the idea), and the dog is almost absent, because it has been de-emphasized by such a low word weight.

I hope this cleared things up for you as to which codes and commands are really useful with Midjourney, across both the current and older algorithms. If you found it useful, please consider giving the video a like; otherwise, if you want more Midjourney content, head to my channel and check it out, I've got a ton of videos on there. I hope you have a great day, and I'll see you next time.
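Putting the last few features together, a rough sketch of the syntax; the custom option name and values are illustrative, and in Discord the option name and value are separate fields of the /prefer option set command:

```
/prefer option set art2 --ar 2:1 --chaos 50
/imagine prompt: a hairy dog --art2      (expands to the stored options)
/prefer option list                      (shows all stored custom options)
/imagine prompt: hot:: dog               (multi-prompt: "hot" and "dog" read separately)
/imagine prompt: hot::5 dog::1           (prompt weights: "hot" weighted 5x over "dog")
```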
Info
Channel: Wade McMaster
Views: 89,583
Keywords: midjourney codes, ai art, midjourney, midjourney tips, how to use midjourney, midjourney tips tricks, midjourney prompt tips, midjourney ai, midjourney ai tutorial, mid journey ai, ai art generator, midjourney guide
Id: _Oy6Vwt-hiE
Length: 21min 59sec (1319 seconds)
Published: Thu Feb 16 2023