Is Hollywood Using AI?

Video Statistics and Information

Captions
There's a brand new company that allows you to shoot a scene and then move your camera around in 3D space with photorealistic video quality. A24 and Netflix also created brand new projects utilizing AI tools, Midjourney and Adobe released AI tools this week that you can use right away, and Drake himself released an AI-generated song. I've got to say, we are living in a wild timeline. This is your AI film news of the week.

I want to kick things off today by talking about a brand new innovation from a team called Infinite Realities. The team set up a system where they can shoot video footage to create a 3D Gaussian splat as video, and the cool thing is you can go into 3D and zoom in on the scene, and the results look borderline photorealistic. Having the ability to put your subjects in any scene and essentially shoot the entire environment is a big part of what I think the future of filmmaking will look like. On the back end, after you frame up your scene, you will more than likely run an AI upscaler to recover maximum detail on anything that was lost while moving your 3D camera around the scene.

Midjourney also released a brand new style reference feature that allows you to quickly iterate through many different image examples for creative direction. Let me show you how to use it. Go to Midjourney on Discord, or the online website if you have access, and imagine "a purple Chihuahua." Then add a space, type --sref (that's style reference), another space, and the word random. This pulls a random style so you can go through the process of creatively imagining what your subject would look like in different styles. It's really fun to use, and I like pairing it with the repeat parameter, --r, for additional iterations; we'll say 10 to create 10 prompts at the exact same time, and hit enter.

In just a few seconds it generated our results. We have this first batch in an interesting illustrated style; we have these Easter Chihuahuas, which aren't very purple and look like Easter stock photography; we have this purple Chihuahua, which is just delightful; and this one actually does look like my dog Chester if he had purple hair, which is fantastic. As you can see, the styles are very random and very different, and they let you creatively imagine and think through the creative process. I have to show you one image I saw earlier: we ran the exact same prompt and it generated this majestic Chihuahua, and I was this close to changing the icon on our YouTube channel to it, because it's my new favorite image. What is going on with those eyes? They're totally going to haunt me tonight.
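(If it helps to see the whole prompt written out, here is a minimal sketch of the two parameters described above as a small Python helper. The --sref random and --r flags follow Midjourney's documented prompt syntax; the helper function itself is just illustrative string building, not any official API.)

```python
# Illustrative helper only: it assembles the /imagine prompt text described above.
# --sref random asks Midjourney for a random style reference; --r repeats the prompt.
def midjourney_prompt(subject: str, repeats: int = 10) -> str:
    """Build a prompt string using a random style reference plus the repeat flag."""
    return f"{subject} --sref random --r {repeats}"

# Produces: "a purple Chihuahua --sref random --r 10"
print(midjourney_prompt("a purple Chihuahua", repeats=10))
```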
Adobe also announced brand new updates to their Firefly model and to Adobe Photoshop, and the cool thing is these features are available for you to use right away. Let me show you how to use them. The first big announcement is Adobe Firefly 3, Adobe's most powerful AI image generator yet. The quality is really nice, but is it as good as some of the other image generators on the market? That's what we're going to test. Hop over to the Adobe Firefly website at firefly.adobe.com and click the generate button to open the Firefly image generator.

We'll keep things pretty basic here: "a cinematic still of a woman in a sci-fi film," and we'll also add "shot on film" to reinforce the cinematic, hyper-realistic imagery. We'll make sure the style is set to only cinematic and hyper-realistic, not fantasy or art, so the result lives in a more photorealistic world, and click try prompt.

Let's take a look at our results. We have four images, and they're basically all on a subway. I will say the quality looks a bit better than what we traditionally got from Adobe Firefly, but one of the problems is that most of these images still come across as editorial. Remember, Adobe's model was primarily trained on Adobe's stock library, so a lot of the images can look like stock photography. That said, the images really do look pretty good, so let's compare them to the same prompt inside Midjourney. The Midjourney images are very, very filmic. Notice the consistency, even in the light in her eye: when you're trying to study the light direction for a scene, you look at the reflections in the subject's eyes, and that tells you a bit about the lighting setup. Here it looks like they're lighting with a bounce card or a softbox under the subject, and there is indeed that uplighting under her chin, so it's pretty accurate to the way light would be rendered on set. There are quite a few other images here that look really, really good. So at this point, for a cinematic aesthetic, Firefly still isn't quite as good as Midjourney.

I do want to try a different style as well, so let's go back over to Firefly, paste in "a purple Chihuahua drinking coffee in the style of 3D animation," and click generate. In the results we have this Chihuahua enjoying his espresso on a nice snowy day; he reminds me of the coziest version of the "This Is Fine" dog. It's great. But I will say these images come across more as fantasy art than 3D animation, so again, I'm not loving the results we're getting from Adobe Firefly just yet. Let's compare that to what we get inside Midjourney with the same prompt, "a purple Chihuahua drinking coffee in the style of 3D animation." Option number one: that dog looks really, really old, poor buddy. This one looks pretty good; I like his little fingers holding the cup. This one's coloring is a little too purple, but it looks pretty good, although I think he may have a third arm coming from the middle of his body, so I guess he lived near a radioactive plant. And this one, I don't know, is that a mixture of a raccoon and a Chihuahua? I'm not entirely sure, but it does look pretty cinematic, and the 3D animation style looks pretty good. As you can see, the ability to art direct your scene to get even better results is essential, even if you're using a tool like Midjourney.
And that brings us to our game of the week. For this week's game, I want you to tell me which image was created in which image generator. We have the same image created in Midjourney, Firefly, Runway, and DALL·E, and I want you to tell me which one was created in which tool. The winner will get some free swag from Curious Refuge; let us know your answer in the comments of this video. This is image number one, this is image number two, this is image number three, and this is image number four. If you think you know the answer, let us know in the comments.

Adobe is also bringing the ability to do generative expand to the mobile version of Adobe Firefly, so if you want to take the edges of a frame and expand them, you can use their online tool. Right now it's only in a public preview, but this tool will be going to the cloud very soon. Just as an example, we can type in a prompt here. Say we have this reference image of a majestic tiger, which also has some cheetah print (not too sure what's going on there), and we want to expand the frame. We'll type "a waterfall" and click generate. In just a few seconds it generated our results: option one looks terrible, option three looks pretty bad, and in option four it could be argued that the background is an out-of-focus waterfall, so I'll give Adobe Firefly the point there; it does look like the image was expanded in a realistic way. So pretty soon you'll be able to use this feature on the web.

This feature is actually already available inside Adobe Photoshop 2024 and the Adobe Photoshop beta, so let me show you how to use it. Open Adobe Photoshop, bring in any image you want, and go to the crop tool in the top left (you can hit the C key as a keyboard shortcut). We want to expand our frame, and there are many ways to do it: you can grab the edges and expand, grab the corners and expand, hold down Option to expand from the middle (which is really helpful if you want to keep the middle as the center of your frame), or hold down Option and Shift to expand proportionally from the middle. I'll expand up to here, let go, click the generative expand button, and click generate. In just a few seconds it expanded our frame, and it did a really good job with this image. I'm always impressed with Adobe's generative expand feature; I feel like it does a much better job than the other tools available on the market.

On that note, Adobe also released a ton of new features inside the beta version of Adobe Photoshop, so let me show you how to use them. To get the beta version of Adobe Photoshop (a lot of people have trouble with this), go to the Creative Cloud panel, go to the beta apps section, and click on Adobe Photoshop. You may have issues opening or updating it if you already have the Photoshop beta installed on your computer; if that's the case, click on the Photoshop beta text, hit the three dots, go to other versions, and install the latest version of the Adobe Photoshop beta. For some reason people get hung up there, so that's how you install it on your computer.
Now that it's installed, I want to show you some really interesting things. The first is that you can use the Firefly model directly in Photoshop. If we go to Edit and then down to Generate Image, we can generate an entire image. For example, we can say "a desert with strawberry fields" — that's a desert with one S, not a dessert with Strawberry Fields — set the content type to photo (you can also upload a reference image, which I'll show you in just a second), and click generate. As you can see, it generated the image from our prompt, and it actually did a pretty good job; this really does look like some of the farms outside of Los Angeles, and all of these examples look pretty good. I do think there's something a little wonky with this last example, especially because the desert seems to sit right next to an ocean, which is interesting — I don't think it's best to grow strawberries near an ocean and a desert, but hey, I'm not a farmer.

You also have the ability to create an image using a reference image, which is very similar to image reference in other tools. It's really easy to use: go to Edit, then Generate Image, click on the reference image option, and either pick one of the presets or choose your own image. For our example we'll use this image of a plant on a red background and click open. I actually want to change the prompt to "a coffee cup with a plant on a red background." You can of course set the content type to photo and add effects if you want, but I'll keep the defaults and click generate. After a few seconds it created the image, and I've got to say it looks really great; it did a great job of combining the reference image with the final result. Some of these look more like regular pots — that one kind of looks like a ramekin — but this one has the coffee handle, and I think it did a great job.

There's another cool feature I want to show you real quick. We have this image of a breakfast, and you know what it's missing? A strip of bacon. So let's add some bacon: I'm just going to draw a shape like this, click generative fill, type "bacon," and click generate. You can see we have some options for the bacon, so let's pick our favorite. This one looks raw, so I don't know if you want to eat that; this one looks perfectly cooked; that one looks a little undercooked as well. So we'll pick this one.

Now let's say you like the way this bacon looks but you want subtle variants of it. Previously you didn't have the ability to reference the style of a generation to get subtle variants, but now you can. All you have to do is go to the image, click on the three dots, and click generate similar. It will generate similar images that use your generation as a reference. Looking at the results, you can see the bacon keeps those crispy edges — the middle might be a little raw, but it's not bad. This one I'm not sure did a great job, but it is at least consistent with the crispy edges, and the same goes for this one.

Now let's say we liked this generation and wanted to use it in a professional project. One of the problems with generative fill is that it doesn't create a pixel-perfect version of the generated image: on a big canvas, the areas where generative fill was used will actually look pixelated. But with the new feature in Photoshop, you can get more detail in those areas. To show you, I'm just going to zoom in on this bacon.
You can see how nice and sharp the egg is, while the bacon is pretty soft — it's not exactly the same resolution. To fix that, all you have to do is click this new little icon that says enhance detail. It takes just a few seconds, and you can see the details are now enhanced: there's more texturing in the bacon, and it really does match the overall visual aesthetic of the egg. It looks like the higher part is in focus, very similar to the toast, and the lower part is a little more out of focus.

The next feature I want to show you is the ability to not only remove a background but also generate a brand new one. We have this product photography of a shoe, and if you want to remove the background, it's super easy, barely an inconvenience: just click remove background. Now the background is removed, and we have a brand new button that says generate background. We'll click it and describe what we want to see in the background; for our example I'll say "a jungle with birds" and click generate. Looking at the results: option number one looks terrible, option number two looks terrible, and option number three is perhaps the worst one yet. So as you can see, this feature is pretty good in select instances, but in most practical applications you're really going to have to art direct it back and forth to get a result that looks good.

Even though these tools have a ways to go, the fact that Adobe is letting you use these features directly in its professional applications and in the online version of its website really shows how big-time creative companies are embracing generative AI technologies. I think that over the next few months and years this technology is going to dramatically improve and change the way we approach our creative projects.

Not to be outdone, the team at Stability AI released their Stable Diffusion 3 API this last week, and they are saying that Stable Diffusion 3 is equal to or outperforms state-of-the-art text-to-image generation systems such as DALL·E 3 and Midjourney v6 in typography and prompt adherence, based on human preference evaluations.
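(Since this release is an API rather than a web app, here is a minimal sketch of what a request might look like in Python with the requests library. The endpoint URL, form fields, and headers reflect Stability AI's public documentation at the time, but treat them as assumptions and check the current docs — and use your own API key — before relying on this.)

```python
# Hedged sketch: generate one image with the Stable Diffusion 3 API.
# The endpoint and field names are assumptions based on Stability AI's docs.
import requests

API_KEY = "YOUR_STABILITY_API_KEY"  # placeholder; substitute your own key

response = requests.post(
    "https://api.stability.ai/v2beta/stable-image/generate/sd3",  # assumed endpoint
    headers={
        "authorization": f"Bearer {API_KEY}",
        "accept": "image/*",  # ask for raw image bytes in the response
    },
    files={"none": ""},  # forces multipart/form-data, which this API expects
    data={
        "prompt": (
            "a portrait photograph of an anthropomorphic tortoise "
            "seated on a New York City subway train"
        ),
        "output_format": "png",
    },
)
response.raise_for_status()

# Save the returned image to disk.
with open("sd3_tortoise.png", "wb") as f:
    f.write(response.content)
```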
Well, let's see about those claims. For our first comparison we have Stable Diffusion 3 on the left and Midjourney on the right. The prompt was a portrait photograph of an anthropomorphic tortoise seated on a New York City subway train. Both of these images look pretty darn good; the one on the left looks more like a modern photo and the one on the right looks more like a vintage photo. I think I prefer the Stable Diffusion one because it looks more photorealistic — there is some wonkiness going on with the hand of the turtle on the right, whereas the hands on the left look more realistic — but generally speaking, I'd say it's a pretty close competition.

Stability really prides itself on generating images with accurate text and creative direction, so let's look at some of those examples. The image on the left was created with Stable Diffusion 3 and the image on the right with Midjourney v6. I'd say they both look good, but ultimately I think Stable Diffusion did a much better job, not only at creating a really interesting typeface but also with the creative direction of the overall scene. We also have this scene of a red sofa on top of a white building, with graffiti that says "the best view in the city." The one on the left is Stable Diffusion and the one on the right is Midjourney. The one on the right looks more photorealistic — it really does look like a still from downtown Los Angeles — whereas the one on the left has a more stylized aesthetic, and the graffiti looks more like text composited on afterward rather than text that was actually in the photograph. We also have these images of a vintage photograph of a man who has a TV for a head. I'd say it's basically a tie: the one on the left was a little more accurate with the pastel color-grading aesthetic, whereas the one on the right has a few more details that make it seem a little more realistic, but it's pretty darn close. Finally, we have this image of a box that was prompted to say "they say it's not good to think and hear." The one on the left, created in Stable Diffusion, actually says the words; the one on the right, created in Midjourney, is close but doesn't say the exact words we were going for. So generally speaking, I think Stable Diffusion 3 is a huge step forward. In many instances it's better than Midjourney, and because it's open source and you can generate images locally on your machine, it's a great tool if you're looking to get more into advanced AI imagery.

While we're talking about advanced AI training, I should note that enrollment for our AI filmmaking and AI advertising courses starts on May 1st. This is already shaping up to be one of our most popular sessions ever, so we would love to have you in the program. It's been really incredible to see meetups all around the world featuring Curious Refuge students. We just hosted a meetup in San Francisco where folks had an amazing time getting to know one another, and we're hosting an AI filmmaking meetup in London very soon. If you want to see whether any AI events are happening in your area, be sure to check out the AI film events page over on Curious Refuge. I should also note that coming up in the very near future is the Runway Film Festival. We are super excited to attend the LA version of that event on May 1st, and there's also the New York festival on May 9th; you can check their website to see if any tickets are still available.

It was also a big week for the use of artificial intelligence on big Hollywood projects. The first big news comes from Netflix, which released a documentary called What Jennifer Did. Most people watched it and didn't notice anything strange, but a few viewers felt like some of the details looked off: her ear looks a little weird in some scenes, there's a scene where she is giving a peace sign and seems to be missing a finger, and some elements inside the image look a little off. What probably happened is that they dropped a low-res photo into an upscaler like Magnific. For example, this is an image of Jennifer Pan; run it through the upscaler and it results in an image that looks like this. It kind of looks like the original image, but there are definitely things that changed, and if you go down here and turn down the resemblance setting so that things can change even more, the result will look photorealistic but the details will dramatically change. Now, I don't personally feel like it's weird to use AI-generated visuals in a documentary, any more than it's weird to do reenactments for a documentary. What's weird is if you don't tell people you used AI-generated imagery in that documentary and you try to make it seem like it was a real asset.
So I think it's really important for documentarians to simply let people know that there are AI-generated images inside the documentary when people are watching the project.

Next up, A24 used AI to create some movie posters for Alex Garland's new Civil War film. The posters are actually pretty cool, and of course people started choosing sides and arguing — and yes, this is definitely a marketing stunt — but you get the idea: AI is increasingly used at every stage of the Hollywood production pipeline, from pre-production all the way to marketing and distribution.

We also have a really interesting story coming from TCL, the people who make televisions. They came out with their own TV+ platform that is billing itself as one of the first AI studios in the world, and they created a trailer for an AI-generated rom-com; you can find a link to the trailer below this video. It's really interesting that a major company is investing so heavily in exploring AI-generated content: while most major studios are unwilling to press too heavily into AI-generated content, companies like TCL are coming in with their own service.

And speaking of AI-generated films, I want to let you know that we just launched the world's first AI film trailer competition. If you want to show off your skills and win an Apple Vision Pro, this is your shot to be seen on the biggest stage in AI filmmaking. We've partnered with Submachine to bring you a competition that will be incredibly fun. We'll have expert judges from Curious Refuge, Submachine, and the TV Academy, and we even asked Matt Wolfe to be a guest judge. You'll find more information about the rules in the link below this video; you have until June 6th to turn in your submission.

And speaking of Matt Wolfe, we actually had him on our podcast recently and had a wonderful time talking about the evolution of AI as it relates to creativity and the workplace. Matt is an incredibly gifted YouTuber and AI expert, so it was really fun learning some tips and tricks from him. We'll be releasing that episode very soon.

In the world of white papers, there was an interesting development out of the University of Hong Kong, where a team created a system that can take a 3D scan of a city and let you change the lighting for that scene dynamically. If you want it to be night, it becomes night, and the actual shading and texturing on the buildings change to match. It's really cool, and I think it showcases how much easier generating establishing shots will be with a tool like this. Another team at the University of Hong Kong also released a white paper that lets you upload a reference image and remove the shadowing. The reason that's important is that if you have shadowing in your image and you bring it into a 3D modeling tool, it will often create weird distortions; with a flat texture reference instead, you can go in and change the lighting so it looks more realistic in a 3D environment.

One of the biggest challenges when you're working on an AI image project is having the ability to change the orientation of your camera. We've gotten good at prompting for things like a top view or a worm's-eye view if you want to get below the subject, but the truth is those are really just hacks. What if you could actually control your camera in 3D space?
Well, there's a brand new tool that looks to do just that. You have the ability to change the orientation of your camera and move around your subject, and it will generate a photorealistic result. This will of course have huge implications for the future of not only filmmaking but also advertising: picture yourself doing a virtual photo shoot of your product without actually having the product in front of you.

Drake also released an AI song this last week. Long story short, he is feuding with Kendrick Lamar right now, so Drake thought it would be a good idea to release a song featuring AI clones of Tupac and Snoop Dogg. There's a link to the song below this video, but if you listen to Tupac's and Snoop's voices, they don't sound quite right; Drake sounds a little smoother, while the other artists sound a little low-fidelity. Still, it does seem like mainstream artists are beginning to embrace AI as a creative tool to get their ideas out there.

The team at Synthesia also released AI avatars that look pretty darn realistic. This comes on the heels of our announcement last week about VASA-1 from Microsoft, which basically allows you to upload audio and get a realistic avatar in return. You'll find a link below this video to check out the avatars: they're able to predict expressions just from a script, they have much more natural facial features and head movements, and they all come with matching voices and really realistic lip sync. The beta should be available by the time you're watching this video; you'll find a link below.

We also came across an interesting breakdown from Paul Trillo, in which he showed how he put together a recent music video. He broke his scene into smaller pieces, rendered each of those pieces inside Runway to create the animation, and then stitched it all together — essentially getting more fidelity out of the AI tools by combining them. We actually had the chance to sit down with Paul at NAB 2024 to talk about this specific project. I think stitching videos together to create an entire scene is a really smart way of thinking about AI VFX.

And that brings us to our AI films of the week. Speaking of Paul Trillo, we're going to take a look at the project he created using Sora for TED. The video is a fly-through virtual experience where you go through all sorts of really interesting worlds, from laboratories to farms, and the VFX look really breathtaking. I think it's really interesting to see large organizations like TED embrace AI, not only for their marketing; they also just released an AI podcast with our friend Bala, which you should check out — it's going to be an amazing podcast. Our next project comes from Ryan Ven, who put together a short film about the loneliest man in the world. It's all about an astronaut who gets stranded in space, and I think Ryan did a really good job of working around the limitations of AI: there's really beautiful shot selection, great movement, and the sound design is on point, so fantastic job on this project, Ryan. And finally, the last film I want to showcase is Max the Robot from Dave Clark, the instructor of our AI advertising course. It's basically a 1950s sitcom about a boy who's a robot, it has great curation, and let's just say that things end poorly.

Thank you so much for watching this week's episode of AI film news. Of course, you can subscribe here on YouTube to get the latest AI tutorials and news directly on the platform.
You can also subscribe to our newsletter to get AI film news sent directly to your inbox, and again, be sure to check out our AI film competition page for more information about how to enter that competition. Thank you so much for watching, and we'll see you in the next episode.
Info
Channel: Curious Refuge
Views: 24,016
Keywords: ai, ai news, artificial intelligence, ai video editing, ai film news, midjourney, midjourney tips, midjourney style tuner, ai art generator, stable diffusion 3, stable diffusion, stability ai, ethical ai use, hollywood ai, ai movie, ai film, ai filmmaking tools, ai short film, ai movie trailer, midjourney ai, ai film competition, ai film contest, ai filmmaking course, runwayml, pika labs, ai video generator, free ai video generator, adobe, adobe cc, curious refuge
Id: HPKv7h52sM8
Length: 30min 40sec (1840 seconds)
Published: Fri Apr 26 2024