Copy My AI Movie Workflow | Pika Labs 1.0 + Runway Motion Brush

Captions
Wow. This changes everything. In this tutorial, I'm going to show you how I generated this animation movie trailer using tools like Runway Gen 2 and Pika Labs 1.0, along with other tools for your video creation process. I will also show you the pros and cons of these platforms, tips and tricks for using these new technologies, and what I think are the best tools for each stage of your AI video creation process. So sit back, relax, and let's get started.

We're going to create this movie trailer in three parts. The first part is to create the scenes of the story, and there are three steps in this part. First, we'll have to get a story. If you already have a story, you can skip this step; if you don't, you can use ChatGPT to generate one for your movie.

The second step is to get a storyboard for your movie trailer. A storyboard is a visual plan that outlines each scene of a film with rough drawings and brief notes to visualize the sequence of shots. Here is an example of a storyboard from my favorite Pixar movie, Up: before they create the actual film, they make rough drawings in sequence to visualize it like this. But instead of rough drawings, we will create the images using AI. To create a storyboard in ChatGPT, we can use this prompt to get a storyboard table: "Create a storyboard in a table format for a movie trailer about my story. The storyboard table should include scene numbers, titles, voiceover and detailed scene descriptions." ChatGPT will then give you a table like this. Copy and paste it into a spreadsheet such as Google Sheets for easy organization. I would also add extra columns like this to keep track of image and video generation.

The third step is to generate images of your scenes. Use the scene descriptions from the storyboard as prompts for your image generation. You can use your preferred image generation tool, such as Midjourney, Leonardo or DALL-E. For this one, I am using my own Story Illustrator GPT I built on ChatGPT, which is pre-configured with knowledge about my story's characters. If you haven't seen the video on how to create a story illustrator bot that can draw customized images for your own story, I recommend heading over to watch it for detailed instructions. In the same way, you can paste the remaining prompts and get all the images. Save all the images in a folder with proper names; I prefer to use the scene number followed by the scene title as the image name for easy organization.
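The video does the storyboard step in the ChatGPT web UI. If you'd rather script it, here is a minimal sketch using the OpenAI Python SDK; the model name, prompt format, and CSV layout are my own illustrative choices, not from the video.

```python
# Sketch: generate a storyboard table with the OpenAI API and save it as a CSV.
# Assumptions (not from the video): the `openai` Python SDK (v1+), the model
# name "gpt-4o", and the pipe-separated row format are placeholder choices.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

story = "Two kids discover a hidden treasure map in the town of Maplewood."

prompt = (
    f"Create a storyboard for a movie trailer about this story: {story}\n"
    "Return one line per scene, with fields separated by ' | ': "
    "scene number | title | voiceover | detailed scene description. "
    "No header line, no extra text."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

rows = [
    [field.strip() for field in line.split("|")]
    for line in response.choices[0].message.content.strip().splitlines()
    if line.count("|") == 3  # keep only well-formed scene rows
]

with open("storyboard.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    # Extra tracking columns, as the video suggests adding to the spreadsheet.
    writer.writerow(["Scene", "Title", "Voiceover", "Description",
                     "Image generated", "Video generated"])
    for row in rows:
        writer.writerow(row + ["", ""])
```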
So now we have generated the images for our movie trailer, and we'll start building our footage. Let's begin with the first step: converting images to videos. I have experimented with Runway Gen 2 and Pika Labs 1.0, currently the most popular apps for this particular task. Each has its own set of pros and cons, along with unique applications. I've included all the links I mention in the description below.

First, let's try Runway. Runway gives you 125 credits on its free plan, which is probably enough to make a short 30-second animation to start. Go to runwayml.com and log in or create an account, then go to Text/Image to Video. You can use text to video, but it's harder to control your video style and quality, so click Image + Description. Upload one of the images we just created as the first frame of the scene, then describe the scene you want to see; the more descriptive you are, the better. You can fix the seed number to achieve results similar to previous generations. I normally check Interpolate to ensure the smoothness of the video. If you upgrade, you can also upscale and remove the watermark; I haven't found a need to upgrade this month, so I don't have those options. For General Motion, you can increase or decrease the intensity of motion: higher values result in more motion. I generally keep it around or under five, but you can test it yourself. You can also control the camera motion using this panel by moving the sliders, and you get to see how it will look in the preview window.

The Motion Brush is my favorite feature of Runway Gen 2, and it can help you control a specific area's motion. Brush over the area of your image you want to control. If you want to make your brush size bigger, use the slider in the upper left; if you accidentally paint too big an area, use the eraser tool or the undo/redo arrows. Once you have a basic mask, head down to the Horizontal, Vertical and Proximity sliders. You can think of these as an X, Y and Z grid: X is horizontal movement (left or right), Y is vertical movement (up or down), and Z is proximity (closer or farther). Each slider takes a decimal value in a range from -10 to +10; you can manually input a value, drag the text field left or right, or use the sliders, and the Clear button resets everything back to zero. If you want to learn more about this panel, click the link here for more detailed instructions. Click Save once you're done adjusting your mask.

The free preview only applies when you use a text prompt alone to generate the image; since we have a reference image here, this won't apply. Click Generate 4s. Let's see. I think the result is good, but it's a little too slow for 4 seconds. When you get the result, you can extend it by 4 seconds, and you can also ask it to reveal the prompt. If you don't leave this page, you will see the generation here. When you extend by 4 seconds, the image sometimes gets distorted like this. One workaround is to go to finalframe.net and upload your video to get its final frame, then go back to Runway or Pika Labs and animate that frame again. You then stitch the clips together when editing, so they form an 8-second video, or longer if you have the patience. To save, simply click Download; Runway gives you good image resolution on download, even on the free plan. If you lose this page, go to Assets in the menu and you'll be able to find your generations there.

Let's see what we got here. This reminds me of the giants from Attack on Titan; the boy got morphed into some black hole or something. Sometimes you will generate something creepy like this, or something weird like this, which I think is pretty cute, or this, or this, or a completely still one (this one is not moving, is it?). This one is funny. And for this running scene, I tried multiple times and Runway could not give me a proper animation: it kept changing the faces of my characters and gave me grotesque results like this one, and this one. For other images with slight movement, it looks pretty okay, though. I think sometimes it is probably luck, or maybe I need to learn how to properly adjust the settings before generating to get a better result. Let me know what you think could improve this in the comment section.
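finalframe.net works, but you can also grab the last frame locally. Here is a minimal sketch using OpenCV; this is my own alternative, not a tool from the video, and the file names are placeholders.

```python
# Sketch: extract the final frame of a generated clip locally, as an
# alternative to finalframe.net. Assumes `opencv-python` is installed.
import cv2

def last_frame(video_path: str, image_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Seek to the last frame; some codecs misreport the frame count,
    # so step backwards until a frame actually decodes.
    for index in range(frame_count - 1, -1, -1):
        cap.set(cv2.CAP_PROP_POS_FRAMES, index)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(image_path, frame)
            break
    cap.release()

last_frame("scene01_runway.mp4", "scene01_last_frame.png")
```

You can then re-upload the saved frame as the first frame of a new generation and stitch the two clips together in your editor, as described above.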
Now let's try the Pika Labs web version. Go to Pika.art and sign in. You will see the Explore tab with other people's generations, and you can also go to your own Library for your own generations. To generate a video, come down to the section where you can input your text prompt and provide a reference image or video. For today's purposes, we are using image-to-video generation only, so I will upload an image we just created, and here you can type the prompt. You want to describe the action that is happening in the animation, and be as descriptive as possible without confusing the AI; I've also heard it helps to add a style, like "Pixar style". Under the video options you can control the frames per second; I normally leave it at the default, which is 24 frames per second. For motion control, like Runway, you can control the camera action down here, and you can also control the strength of motion. You can experiment with the strength yourself, but I find it easier to keep the motion low, around 0 to 2: if the motion strength gets stronger, the videos tend to come out a little unstable, and people can morph into things or disappear completely. But just try. Under the parameters icon you can also input a negative prompt to avoid unwanted outcomes; you can add these to your negative prompts and adjust as needed. If you want to generate with the same seed number as a certain output, you can use the "i" icon to see the seed number and the other parameters. Just note that it will only work if you use the same prompt and negative prompt, and you cannot use a seed from Discord on the web version. "Consistency with the text" controls how closely the result follows your prompt: the higher it is, the more the video relates to the text. After adjusting the settings, click the star (generate) button.

After you have a generation, you can click the three dots to add four more seconds or upscale your video, just like Runway can. You can click Retry to redo it with the same settings, or Reprompt to adjust the parameters, but I would just use Edit so you have more options to fine-tune your generated videos. In Edit you can modify a specific region you define with this box — for example, we can try to change the yellow shirt to a green shirt (what is this?) — or expand the canvas to a preset aspect ratio. Sometimes the expanded version of the video edits out some details: for example, the bow of the hair band is gone here, and the flowers are removed here.

I want to show you an example of how I used the Modify Region function to edit an existing video. I generated this video on the Pika Discord app, and the girl is running smoothly but the boy is running in a very awkward way. So I uploaded the video again on the web version. I wanted to edit the leg region of the running boy first, so I specified the leg area in Modify Region and typed in the text prompt "a boy is running". As you can see, I generated multiple times to get the legs working properly. Then I edited the hand region to get a final result I think is workable, though it is a little blurry. Somehow it didn't let me change the sneakers to green. I think the lesson is to make incremental changes on complicated motions like running. To download, go to any generation; the download button is here.

After you get all the video footage, you may have to upscale some of it to increase the video quality. The Pika Labs web version and Runway both have an upscale function you can use directly in the apps, but if you want higher-resolution video, you can enhance it to 4K using software like Topaz or HitPaw. They are not free, though. I used the free video upscaler in CapCut, and sometimes you do have to upscale twice to get to a higher resolution. I will link these in the description below.
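If you just need more pixels for editing, a plain resample also works; this is my own ffmpeg-based sketch, not a tool from the video, and unlike Topaz, HitPaw, or CapCut's AI upscaler it only resizes and does not reconstruct detail.

```python
# Sketch: double a clip's resolution with a plain ffmpeg resample.
# Assumption: ffmpeg is installed and on PATH. This only resizes with
# Lanczos filtering; it does not add detail like an AI upscaler does.
import subprocess

def upscale_2x(src: str, dst: str) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-vf", "scale=iw*2:ih*2:flags=lanczos",  # 2x width and height
            "-c:a", "copy",  # keep the audio stream untouched
            dst,
        ],
        check=True,
    )

upscale_2x("scene01.mp4", "scene01_2x.mp4")
```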
Next, we'll need to create the voiceover. I am using ElevenLabs for the narrator and the two children characters. Go to elevenlabs.io and sign in. You can do speech synthesis directly here, which turns your text into speech. They have premade voices you can choose from; listen to a sample by clicking the play button. "Be here now. Be someplace else later. Is that so complicated?" I think Bill is perfect as the narrator of my story. You can adjust the voice settings and the languages. Copy and paste the narrator's text from the storyboard table we made earlier and click Generate. "In Maplewood, a town where every story begins with a memory." The free version currently allows 10,000 characters per month, which is fine if you're just starting.

For more voices, go to the Voice Library, where you can click the filter button and filter for the voices you want. Unfortunately, they don't seem to have voices for kids, so I just used a woman's voice for my boy character. "The world is round and the place which may seem like the end may also be the beginning." Once you find your voice, add it to the VoiceLab. Note that the VoiceLab only allows three custom voices at a time on the free version, so make sure to clear out a space if you want to add a new one. Come back to Speech Synthesis and select the voice you just picked; it will be under Generated. Adjust the voice settings as you see fit. "A hidden treasure in Maplewood." It sounds very flat, without all the excitement I wanted, so I tried the Speech to Speech function to see if I could correct this. The idea of Speech to Speech is that it creates a speech by combining the style and content of an audio file you upload with the voice of your choice. You can keep the same settings, but under Audio you can either upload a file or record your own. I don't have a reference file, so I had to do the recording myself — please don't make fun of my poor acting. Record by clicking here: "A hidden treasure in Maplewood." After you record, you can preview it here, then hit Generate. The output came out garbled ("a he trivia in the award"), so it didn't really work; I think it's probably because of my accent. I'll have to try again: delete this and record again. "A hidden treasure in Maplewood? A hidden treasure in Nicklewood?" I think the accent is really getting in the way, but eventually I found something that could work — you'll see. Download the audio under here.

For children's voices, I later found that Play.HT actually has some good children's voices to choose from if you need them. After you get into the dashboard, just click Create and use the standard and premium voices; when you click here, you can filter for the kid voices. I won't go into this because the video is getting too long, but you can play around with it.
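The video does all of this in the ElevenLabs web UI; ElevenLabs also exposes a REST API, so you can batch-generate the storyboard voiceover lines. A minimal sketch follows — the voice ID is a placeholder to copy from your dashboard, and the model name and stability/similarity values are illustrative assumptions, not settings from the video.

```python
# Sketch: batch text-to-speech against the ElevenLabs REST API.
# Assumptions: ELEVENLABS_API_KEY is set in the environment, `requests`
# is installed, and VOICE_ID / the model name are placeholders.
import os
import requests

VOICE_ID = "YOUR_VOICE_ID"  # copy from the ElevenLabs dashboard
URL = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

lines = {
    "scene01_narrator.mp3": "In Maplewood, a town where every story begins with a memory.",
    "scene02_narrator.mp3": "A hidden treasure in Maplewood.",
}

for filename, text in lines.items():
    resp = requests.post(
        URL,
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={
            "text": text,
            "model_id": "eleven_monolingual_v1",  # model names change; check the docs
            "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
        },
    )
    resp.raise_for_status()
    with open(filename, "wb") as f:
        f.write(resp.content)  # the response body is the MP3 audio
```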
After you have the audio, for some clips you may want to sync the characters' lips with your voiceover. For this we're using Lalamu, which is currently free as a demo and lets you lip-sync animation. In here, click Add New Audio. You can type in your text and change the narrator and emotion to generate the audio directly here, but I prefer to upload the voiceover I generated on ElevenLabs, so pick the audio file you generated. Upload the video clip here, make sure to select the right video and audio clips, and click "Generate lip sync video". Wow. This changes everything. The lip sync is great, but the downside is the low resolution, so you may have to upscale this video with a video upscaler. Right-click on the video and save it. Even after upscaling this video twice in CapCut, it still looks too blurry to me in close-up shots; the father's shots look okay, though. So I created a workaround: I generated the talking-head video on HeyGen, and in Adobe Premiere I created a mask over only the lip section to reveal just that part. You also have to make some color adjustments and trace the mask on each frame to make it look natural. I believe Lalamu will come out with HD lip sync soon, so I won't go into the details; this is just an idea for fixing it for now.

The final step is to edit the video. Due to the length of this video, I can't go into detail here, so I will quickly go over the ideas; there are plenty of resources that cover video editing in depth. We will use CapCut, a free online editing tool, for the demonstration. Create a new video project, drag in all the video clips and audio according to the sequence order, and add transitions and text to the clips as you see fit (if you'd rather script the rough assembly of clips first, see the sketch below). Add the background music: you can use the CapCut audio library for free, as it has a lot of music and sound effects to choose from. I didn't find music I liked there, so I got mine from Envato Elements, which is a great place to get stock videos, music, images and graphic templates; it sometimes gives me better-quality elements for content creation. After you finish all the edits, export the video. And done.
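As mentioned above, here is a sketch for rough-cutting the generated clips into one sequence before fine editing in CapCut. It uses ffmpeg's concat demuxer; this is my own addition, not a step from the video, and the file names are placeholders.

```python
# Sketch: rough-cut clips into one sequence with ffmpeg's concat demuxer.
# Assumptions: ffmpeg is on PATH, and all clips share the same codec,
# resolution and frame rate (re-encode them first if not).
import subprocess

clips = ["scene01.mp4", "scene02.mp4", "scene03.mp4"]

# The concat demuxer reads a text file listing the inputs in order.
with open("clips.txt", "w", encoding="utf-8") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", "clips.txt", "-c", "copy", "trailer_roughcut.mp4"],
    check=True,
)
```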
This is how you create an animation movie trailer; if you want to see the final product, it's here. If you liked this one, check out this video where I share how to create multiple consistent characters using ChatGPT. I will see you there.
Info
Channel: Mia Meow
Views: 6,819
Keywords: ai tools, howtoai, digital products, make money online, kdp, ai marketing, one person business, chatgpt, runway motion brush, pika 1.0, pika labs, image to video ai, ai movie, ai animation, ai filmmaking, ai filmmaking tools, heygen, elevenlabs ai, elevenlabs, lalamu, lip sync ai
Id: pMKR44zYyIU
Length: 17min 28sec (1048 seconds)
Published: Sun Dec 31 2023