The Insane AI News Happening No One is Noticing

Video Statistics and Information

Captions
Let's break down all the stuff you probably missed in the world of AI this week. Although it was a bit quieter of a week in terms of big AI announcements, there were still quite a few notable things happening that I think are worth talking about, starting with the fact that GPT-4 is actually getting faster. In previous weeks I talked about how people seemed to believe ChatGPT was getting dumber, which has been somewhat debunked by now, but it is definitely getting faster. The company Portkey has been keeping an eye on it: the line in red is GPT-4, the line in blue is GPT-3.5, and we're getting to a point where GPT-4 is almost as fast as GPT-3.5, with just a tiny margin left.

This week OpenAI also announced that DALL·E 3 is now available in ChatGPT Plus and Enterprise. Most people actually got access to it last week, but OpenAI made it official, so I'm assuming it's pretty much rolled out to everyone now, although the fact that we can now use DALL·E 3 was in last week's news video as well. If you do want to use DALL·E 3, just log into your ChatGPT account, make sure you're a ChatGPT Plus subscriber, hover over the GPT-4 button, and select DALL·E 3. Now you can make some amazing art, like this space wolf chasing after a taco.

This week Rowan Cheung also pointed out that OpenAI quietly changed their core values, updating them to focus on building AGI. Here's what the core values looked like before the change and here's what they look like now; notice there's now a section that says "AGI focus: We are committed to building a safe, beneficial AGI that will have a massive positive impact on humanity's future. Anything that doesn't help with that is out of scope." So they've made it absolutely clear what OpenAI's real purpose is. Many people have a different definition of what AGI actually means, but Rowan also shared a video clip of Sam Altman giving his: "AGI is basically the equivalent of a median human that you could hire as a coworker, and they could do anything that you'd be happy with a remote coworker doing behind a computer, which includes learning how to go be a doctor, learning how to go be a very competent coder. There's a lot of stuff that a median human is capable of getting good at, and I think one of the skills of an AGI is not any particular milestone but a meta-skill of learning to figure things out, so that it can decide to get good at whatever you need."

Also in OpenAI news this week, OpenAI decided to shut down one of its AI programs. They were working on an AI model code-named Arrakis, named after the desert planet from the sci-fi series Dune. The goal was to power AI applications like ChatGPT much more cheaply, but they announced this week that they decided to scrap it because it struggled to meet expectations around efficiency. Not much else is known about Arrakis, but it sounds like we're never going to see it anyway.

This week Anthropic made a major update to their Claude platform, making it available to users in 95 countries. I personally use Claude at least as much as ChatGPT, if not a little more. It's probably the best tool available right now for summarizing long PDFs, CSV files, transcripts, and things like that; Claude is hands down better at that than ChatGPT simply because it has a much larger context window, and now users in 95 countries will be able to access it as well.
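If you'd rather script that kind of long-document summarization instead of using the Claude web UI, here's a minimal sketch using Anthropic's Python SDK. Treat it as an illustration of the long-context workflow, not an official example: the model name, token limit, and file name are placeholders, and you'd need your own API key set up.

```python
# Minimal sketch: summarizing a long transcript with Anthropic's Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment; the model name,
# max_tokens value, and file name are placeholders, not recommendations.
import anthropic

client = anthropic.Anthropic()

with open("long_transcript.txt") as f:
    transcript = f.read()  # the large context window is what makes this workable

response = client.messages.create(
    model="claude-2.1",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Summarize the key points of this transcript:\n\n{transcript}",
    }],
)

print(response.content[0].text)
```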
However, in some bad news for Anthropic, Universal Music is suing them for distributing song lyrics. Essentially, if you go to Claude and type "give me the song lyrics for 'Roar' by Katy Perry," it will actually write out all of the lyrics for you. Since song lyrics are copyrighted, Universal is saying that Anthropic not only distributes copyrighted material without permission but that it used that material to train its language models. They go on to say, "Anthropic's copyright infringement is not innovation; in layman's terms, it's theft." The thing I find interesting, though, is that if I just go to Google and search "Roar by Katy Perry lyrics," Google gives me the lyrics right there, also for free, and I've been able to do that for something like two decades. However, if you ask ChatGPT the same question, it will say, "Sorry, I can't provide the lyrics to copyrighted songs. Would you like a summary instead?"

Personally, I think there needs to be a total overhaul of what copyright law means, because the lines are getting blurrier between what's copyrightable and what's not. We're seeing this with images, with song lyrics, and with people's voices, and pretty soon we're going to see it more and more with video generation. As AI gets more prolific and more people use these simple tools to recreate just about anything, it seems like some of these bigger companies like Universal are going to end up in over their heads with potential lawsuits over all of these creations. Now, I'm not saying it's necessarily right for people to use copyrighted material; I just think the way copyright works needs to be rethought in this post-AI world we're moving into. I don't personally have the solution for that, and I'm not claiming I do.

Also this week, Inflection AI, the creator of the Pi chatbot, announced that Pi can now search the web, so any chats that you have with Pi are up to date as of today.
So if I go to pi.ai/talk, I can ask it super recent questions, for example: who won the boxing match between Logan Paul and Dillon Danis (a really, really stupid viral boxing match)? "Logan Paul won the fight, although it was a bit of a strange one. Dillon Danis refused to strike and instead tried to grapple with Paul, which resulted in him being disqualified in the final round. It was a pretty chaotic fight, with a mass brawl breaking out at one point. In the end, Logan Paul emerged victorious. Did you watch the fight, or are you just curious about the results?" That is pretty up-to-date information, seeing as that fight happened on October 14th, I think. I didn't actually watch it; I just learned the result right now by asking Pi.

This week Stanford put out a report they call the Foundation Model Transparency Index. They compared 10 different chatbots to find out which ones were the most transparent, scoring them on 100 different indicators: things like the data the models were trained on, the labor that went into them, the compute power necessary, the risks, distribution, feedback, and much more. Basically, it found that Meta and their Llama 2 have been the most transparent, followed by Hugging Face, then OpenAI, then Stability, all the way down to Amazon being the least transparent with their Titan model, concluding that "developers can significantly improve transparency by adopting best practices from their competitors" and that open foundation model developers are leading the way. I find it interesting that OpenAI actually beat out Stability AI, because most of Stability's models are open and available for people to play around with.

And finally, in other large language model news this week, Baidu, a company out of China, says that its Ernie AI chatbot is "not inferior in any aspect to GPT-4." They gave a keynote earlier this week and demonstrated improvements to four different capabilities, understanding, generation, reasoning, and memory, basically claiming that it's as good as GPT-4. Ernie 4 isn't publicly available yet, but even if it were, so far these various Ernie bots have only been available in Chinese, so I likely wouldn't be able to use it anyway.

And since we're on the topic of China, there's a new US restriction making it so that Nvidia cannot send their high-powered AI chips to China. They had already been blocked from sending H100 chips because those were deemed too powerful, so Nvidia went ahead and made a lower-spec H800 chip and was exporting those to China instead. The new restrictions just closed that loophole, and now Nvidia can no longer send H100s, H800s, or even A800 chips to China, apparently all in an effort to win the AI race with China.
Now let's shift into talking about AI art, starting with the sponsor of today's video, Wirestock.io. If you're not familiar with Wirestock, it's a single platform where you can upload stock photography that you created, or even AI-generated images, and it will submit them to all of the stock imagery platforms on your behalf. It will even title the images, add tags, create descriptions, and inform the stock photography sites that the images are AI-generated so that you stay compliant. Sites like Imago, Adobe Stock, 123RF, Dreamstime, and Freepik all allow AI-generated art, so you can generate an AI image, submit it to Wirestock, and it will create all the metadata and submit it to all of these sites for you. It's super fast. Wirestock even has its own Discord bot, so if you're familiar with tools like Midjourney, you'll feel right at home. It lets you generate custom images, mix multiple images together, and even reimagine existing images: you can upload an image you already have, tell it to reimagine it, and it will create more images just like that one. So if an image is already selling well for you on a stock photo site, reimagine it and submit more just like it. Wirestock also gives you the ability to create custom collections. I can add this image, call it "pixel collection," create the collection, and add more images straight into it. Now I can link people directly to this collection, they can purchase the whole thing at a collection price, and if you're a premium member of Wirestock, you keep 100% of the earnings. If you're curious, Wirestock even sent me a handful of images that are performing well on Wirestock right now, so if you want an idea of what images to generate, here are a few: a landscape, a tractor, a busy street, this cool, sort of surreal reef image, this cool background image, and this flooded basement. You can learn more about Wirestock by heading over to wirestock.io, and if you use the coupon code MATT20 you'll get 20% off your premium membership, so check that out over at wirestock.io. Thank you so much again to Wirestock for sponsoring this video.

Now let's check in with Ali Jewels on this week's Midjourney office hours update. They added one major new feature inside Midjourney this week: if I generate an image that I really like (let's go ahead and upscale number two here), I now have the option to further upscale the image 2x or 4x, allowing me to get much, much larger images without loss of image quality. They also mentioned that phase one of their new website is going to be released next week. This isn't going to be a platform where you can generate images directly from the website yet, but phase two of the website is supposedly coming out sometime next month, which should give us the ability to generate straight from the website and finally get Midjourney out of Discord.

In other Midjourney news this week, Midjourney partnered with a game company called Sizigi Studios to launch Niji Journey, an iOS and Android app that lets you generate anime-style images directly within the app. I haven't personally used this one myself yet, but it appears to give us another option for generating Midjourney-style art rather than realistic photos.

In the last week and a half, this YouTube video from Peter Whidden has gone viral with almost 3 million views. In it, he shows how he trained an AI to play through Pokemon. He was able to run several instances of an AI character running through the game at once, and he used reinforcement learning with a point-based system, so that when the character did things like catching Pokemon or uncovering new parts of the map, it earned points, pushing the AI to play deeper and deeper into the game and collect more and more Pokemon. If you love AI as much as I do, I highly recommend watching the entire 33-minute video, because he breaks down exactly how it works, the methodology behind it, and the results, and he even created a GitHub repo so you can download the code and run these Pokemon simulations yourself with a Pokemon emulator.
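To make that point-based idea a bit more concrete, here's a tiny, hypothetical reward-shaping sketch in Python. This is not Peter Whidden's actual code (his GitHub repo has the real implementation); the state fields and point values are made up purely to show how exploration and in-game progress can be turned into rewards for a reinforcement learning agent.

```python
# Hypothetical reward shaping in the spirit of the video: points for exploring
# new map tiles and for making progress (catching Pokemon, leveling up).
# NOT the actual project code; field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class GameState:
    coords: tuple           # (map_id, x, y) of the player, read from the emulator
    pokemon_caught: int     # number of Pokemon marked as owned
    total_party_level: int  # sum of the party's levels

def compute_reward(prev: GameState, curr: GameState, visited: set) -> float:
    reward = 0.0

    # Exploration bonus: the first visit to each tile earns a small reward,
    # nudging the agent to uncover more of the map.
    if curr.coords not in visited:
        visited.add(curr.coords)
        reward += 0.1

    # Progress bonuses: catching Pokemon and gaining levels earn larger rewards,
    # pulling the agent deeper into the game over time.
    reward += 1.0 * (curr.pokemon_caught - prev.pokemon_caught)
    reward += 0.5 * (curr.total_party_level - prev.total_party_level)

    return reward
```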
Now, a few weeks ago I talked about Stack Overflow and how they were developing their own sort of AI. Well, this week it was announced that Stack Overflow laid off over a hundred people, 28% of its staff, as the AI coding boom continues. If you're not familiar with Stack Overflow, it's essentially a site where people go to get help with coding: when they run into an issue, they ask a question on Stack Overflow and helpful people on the site answer it. The problem is that now, instead of using Stack Overflow, a lot of developers are just going to tools like ChatGPT and Claude to get help when they run into issues. There are also tools like GitHub Copilot and Amazon's CodeWhisperer that help people write and troubleshoot code, and Stack Overflow has seen a major hit in the amount of traffic it actually gets.

YouTube announced this week that it's getting a new AI-powered ad feature that lets brands target special cultural moments. Essentially, if you have a product related to Halloween, for instance, you can say "I want to advertise this product on any videos talking about Halloween," and YouTube's AI will try to find videos about Halloween and work your ad into those. So instead of specifically targeting certain channels or keywords, it uses AI to find videos, maybe on unrelated channels, that happen to be talking about the specific event you're trying to target.

Also this week, Descript announced some big new AI features, adding new AI voices to their platform: "We're introducing an all-new version of our in-house AI voice tool. We basically took our beat-up old AI into the shop, rebuilt the engine, and outfitted it with a set of all-new space-age capabilities like text-to-speech and lightning-fast authorization. The old Overdub could get you where you wanted to go, but with our new AI voices we're handing you the keys to a rocket ship. No more waiting 24 hours to train or verify a voice, and the quality is much, much better, so good that it's now legitimately possible to create an entire voiceover without recording a single word."

And speaking of AI voice cloning, New York City mayor Eric Adams used AI to make robocalls in languages he doesn't speak. He sent thousands of calls in Spanish, more than 250 in Yiddish, more than 160 in Mandarin, 89 in Cantonese, and 23 in Haitian Creole. A lot of people weren't happy about it, with one person saying, "This is deeply unethical, especially on the taxpayer's dime. Using AI to convince New Yorkers that he speaks languages that he doesn't is deeply Orwellian." Adams actually made a statement at a press conference, saying, "People stop me on the street all the time and say, 'I didn't know you speak Mandarin.'" He doesn't.

This week Meta showed off some research called "Towards a real-time decoding of images from brain activity." Now, we've seen this kind of thing before; previous examples used MRIs, requiring people to lie down in a giant machine and have their brain scanned while viewing images. This new method from Meta uses MEG, or magnetoencephalography, which allows the researchers to see what the brain is seeing in real time without the invasiveness of an MRI. The research showed people an image and then essentially read their brain activity to interpret what they were seeing. The image on the left here is what was actually shown to the person; the image on the right is what was decoded from the brain activity. So you can see they were shown cheese, and this is what the brain saw, or at least what was interpreted.
Now, these aren't perfect replications; obviously this one is like a cheetah, and what it decoded is some weird monkey-like thing, but this is getting really fascinating. We're getting to a point where AI is helping read brain activity and interpret what the brain is seeing. Here are some more examples: the viewed image of a surfer and what was decoded from the scan, a horse and what the brain scan produced, a plane and a plane. It's all pretty close to the original image. And with Meta being one of the companies trying to sell us headwear like the Meta Quest 3, I don't know how I feel about Meta trying to read our brain waves. Is this going to go down the path where some of this technology ends up inside wearable devices and Meta reads our brains to show us advertising? I doubt it will go that far, the world as a whole would probably push back on that, but it feels like Meta is getting close to having the technology to be able to do it.

All right, let's talk about robotics for a minute. This week Amazon said that their new AI-powered robots will reduce fulfillment time by 25%, which to me is just crazy, because we can already go on Amazon, press Buy Now, and literally have something on our doorstep the next morning; in some cases I've ordered products and two hours later they're sitting on my doorstep. And Amazon is saying they're going to reduce fulfillment time by another 25% with these new robots. I don't have a ton of information about what the robots actually look like or how they'll speed up fulfillment; I just find it absolutely fascinating that with how fast Amazon already is, they're still trying to make it faster. It's only a matter of time before we place an order on Amazon and a drone drops it on our doorstep 15 minutes later.

In Dubai, they're actually getting AI-powered self-driving patrol cars that are also equipped with drones. These things are designed to patrol neighborhoods; they've got 360° cameras and facial recognition technology, and they carry a drone on board that the autonomous robot can launch to see into areas the driving robot can't. Apparently they're essentially designed to gather information and, if they see something out of the ordinary, report it back to the police so officers can come and take care of it. But this does feel like a step closer to a robot police force autonomously patrolling the streets, and I don't know how I feel about that yet; honestly, it's kind of creepy.

But let's keep talking robots. This week, a video of Figure 01's dynamic walking was released, showing a robot with a humanoid form that can actually walk around. You can see it here walking forward sort of like a human, maybe a human with a bad back or something. Figure claims to be a first-of-its-kind AI robotics company bringing a general-purpose humanoid to life. At the moment, the goal of these robots is to help with undesirable jobs. They say, "In the future, we believe humanoids will revolutionize a variety of industries, from corporate labor roles to assisting individuals in the home, to caring for the elderly, and to building new worlds on other planets. However, first applications will be in industries such as manufacturing, shipping and logistics, warehousing, and retail, where labor shortages are the most severe."

This week Nvidia also announced Eureka, demonstrating extreme robot dexterity with large language models. It's an improved reinforcement learning technique, and one of the first tests they used it on was getting robots to do complex things with their fingers.
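From what Nvidia has published, the core idea in Eureka is to have a large language model write candidate reward functions and keep the ones that train the best policies in simulation. Here's a rough, hypothetical sketch of that outer loop; none of these helpers are Nvidia's actual API, they're stand-ins to show the shape of the technique.

```python
# Rough, hypothetical sketch of an LLM-driven reward-design loop in the spirit
# of Eureka. None of these helpers are Nvidia's actual API; they're stand-ins.

def ask_llm_for_reward_code(task_description: str, feedback: str) -> str:
    """Ask an LLM to write a Python reward function for the task (stub)."""
    raise NotImplementedError

def train_policy_and_score(reward_code: str) -> float:
    """Train a policy in simulation with this reward and return a task score (stub)."""
    raise NotImplementedError

def eureka_style_search(task_description: str, iterations: int = 5, candidates: int = 4) -> str:
    best_code, best_score, feedback = None, float("-inf"), ""
    for _ in range(iterations):
        # 1. Sample several candidate reward functions from the LLM.
        proposals = [ask_llm_for_reward_code(task_description, feedback) for _ in range(candidates)]
        # 2. Evaluate each candidate by training a policy against it.
        score, code = max((train_policy_and_score(c), c) for c in proposals)
        # 3. Keep the best reward so far and feed the result back to the LLM.
        if score > best_score:
            best_score, best_code = score, code
        feedback = f"Best score so far: {best_score:.3f}. Improve the reward function."
    return best_code
```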
Text-to-video has been all the rage lately; we've gotten Gen-2, we've gotten Pika Labs, we've gotten Moon Valley, and there are all sorts of new AI text-to-video models out there. Today I came across one called Morph Studio, and you can see some examples here. This one claims to generate clips between five and seven seconds long, and not only that, it will actually generate them in high resolution. So let's pop into their Discord and give it a try real quick: I type the slash command and give it the prompt "wolf walking in the snow." It took about three minutes, and here's the video I got from it. Really impressive, actually. It only generated three seconds, but let's take a peek at it in full screen: pretty decent quality as well. At the moment it seems pretty similar to Gen-2, Pika Labs, or Moon Valley, but the quality is impressive. Looking at some of the other generations that have been made here, this time-lapse of the Milky Way galaxy looks really cool, this happy woman looks really good, and this "swim in the sky, blue fish" one is pretty good, other than the fact that the fish had a tail face for a second there. That one was actually seven seconds, so it's definitely capable of generating longer videos from time to time; it looks like you may have to add an extra "-s 7" to get the longer videos, though. As of right now it appears to be free to use; you can find it over at morphstudio.com and by joining their Discord.

Here's another tool I came across: Masterpiece X, a company that recently partnered with Nvidia to generate 3D animated assets from a text prompt. Now, I haven't tried this out before, so I click on "Generate 3D Now." You can see it starts us off with 250 credits, and the credit cost to generate is about 50 credits. You can generate objects, animals, or humans; let's go ahead and generate an animal. You can see the estimated time is 2 to 8 minutes per 3D model. I click "next step" to add shape details, let's just go ahead and do a wolf, then "next step" to add paint. It gives us some guidelines here: "hyper realistic, 4K, gray and white." I click advanced settings, which gives us some negative prompt options, and then click "generate 3D models." It says it may take 2 to 8 minutes for each model, and it now says "processing" up here. And we have our wolf model; you can see it took about 3 minutes, so let's go ahead and click on it. I'd say it's got more of a cat look to it than a wolf, but not bad. I mean, we've come a long, long way.
When I've used Common Sense Machines (CSM), that took a lot longer to generate. It's interesting, because if I look at it from some angles you can kind of see the wolf's nose right here; straight on from the front it looks like it's supposed to be a wolf, but when you turn it, it looks like the nose got put on the neck or something. So not quite perfect, but it makes for an interesting cat. Now, this tool also claims it can animate whatever you generate; however, it looks like the animate option is only for the human models, so let's go ahead and create a human real quick. Let's generate a pirate; I'm just going to throw in some keywords like "4K," "ArtStation," and "cartoon," and now it gives us the animation options. We've got "bones only," which looks like it lets us rig the model for other tools, or we can have this tool animate it for us. Let's do bones and animation, set it to punching, and generate to see what we get with an animated human model. Well, it took about 6 minutes to generate this one, probably because of the animation. Let's take a peek. Okay, I mean, it's punching, I guess. It looks more like a zombie mutant than a pirate, but again, this is as bad as it's going to get, and the amount of time it took to generate this versus something like Common Sense Machines is just a fraction: this generated in 6 minutes, where CSM would have taken something like 24 hours, and it wouldn't have been animated. So pretty cool and pretty fun to play around with. I'm sure if you figured out the prompting and got better at it, you could generate much better results than this; looking through some of the examples here, like this tiger, these people, and this horse, other people are getting much better generations out of it than I am, so it definitely has some promise. I just haven't spent enough time figuring out how to prompt it properly yet.

And there you have it: the news roundup in the AI space for this week. Again, it was a bit of a slower week in terms of announcements, but still a lot of really cool stuff happened, and we got some new tools and toys to play with, so that's always a fun time. If you enjoyed this video, make sure you check out futuretools.io, where I curate all of the latest AI news on a daily basis and update the homepage with all of the coolest AI tools I come across. I also have a weekly newsletter; if you sign up for the free newsletter, I'll send the latest AI news and the coolest AI tools directly to your email inbox. You can find it all over at futuretools.io, so check that out. And if you enjoyed this video and want to stay in the loop with the latest AI news and tutorials, make sure you like this video and subscribe to this channel, and I'll make sure more videos like this show up in your YouTube feed. Thanks so much for nerding out with me today, I really appreciate you, and thanks once again to Wirestock for sponsoring this video, you guys are awesome as well. I hope you learned something new and discovered some cool new AI tech or news, because that's always my goal with these videos, and I'm always having a blast making them and sharing them with you. So thank you once again, really appreciate you, see you in the next video. [Music] Bye-bye.
Info
Channel: Matt Wolfe
Views: 148,324
Keywords: AI, Artificial Intelligence, FutureTools, Futurism, Machine Learning, Deep Learning, Future Tools, Matt Wolfe, AI News, AI Tools, ai news, ai tools, openai, anthropic, robotics, free ai tools, best ai tools, top ai tools, new ai tools, ai, ai tools 2023, chatgpt, chat gpt, anthropic ai, matt wolfe, openai chatgpt, future tools, open ai, futuretools, ai video, gpt 4, ai art, news, midjourney, meta, ai video generator, chatgpt 4, robot, text to speech, Descript, Nvidia
Id: K7M7-lCbOgg
Length: 25min 3sec (1503 seconds)
Published: Fri Oct 20 2023