Did AI Just End Music? Ft. Rick Beato

Video Statistics and Information

Captions
This video was brought to you by Brilliant. Hi, welcome to another episode of ColdFusion. The following song is 100% generated by AI. [Music] "...ColdFusion, where curiosity ignites and minds take flight... a better future... can help us make a better future..." [Music]

Everything you hear is all artificially generated by AI. And by generated, I mean the lyrics (which was ChatGPT), the rhythm, the vocals. To get this output, all I did was type in a text prompt of what I wanted. There were a few revisions, but still, the result is honestly insane. And if you think that's a one-off, here's another example. I wanted a classical piece, so I typed this. [Music] How about some guitar-driven stuff? "So you crave a wild ride." Awesome. [Music] [Applause] R&B: "...our souls, please remember me..." [Music] Awesome. UK [Music] [Applause] garage. All very impressive. Even the album covers were made using AI, and that's just to let you know what we're dealing with here.

As some of you know, I've been a musician for the better part of 20 years and a producer for a decade. If you've watched a ColdFusion video before, you've heard some of my music. So, being a technology enthusiast as well, I do have mixed feelings about this. AI has finally come to something very close to my heart. We've seen AI voice swapping that could get artists to perform songs in different genres, but this is something totally different. With this latest generation of AI music, it's not just the vocals but the backing music which is artificial, which in my view is much more difficult for AI to do coherently. What you just heard previously was from a platform called Udio, and it was created by a group of ex-DeepMind engineers. It's a watershed moment for AI music, and they're not the only ones: a few months ago there was a similar application called Suno AI. In a recent video I looked at "The AI Deception", and it was about how some companies are lying and overpromising the capabilities of AI. But on the flip side, I said that there are also some companies that are obfuscating the true extent of AI
replacing jobs. Today's episode falls into the second category. First it was images, graphic design, writing, videos, and now, finally, music. It's clear that AI music systems will eventually impact those in the music industry, but how? In this episode we explore AI music apps: what they're all about, how the tech works, its criticisms, and how AI could disrupt the music industry. I also had the opportunity to speak to the amazing musician and YouTuber Rick Beato to get his thoughts on this, and I also spoke to Taryn Southern, who's considered to be one of the first artists to commercially release music using AI. All right, so on the count of three, let's go. You are watching ColdFusion TV.

Quote: "Creating music is an inherently complex process, but we're streamlining that, making it more accessible and intuitive than ever before." End quote. That's from Suno's co-founder Mikey Shulman, and it just about summarizes what these AI companies are trying to achieve. In fact, the premise of Suno and Udio is basically the same: now anybody can create music, even if they have no prior musical knowledge. Just type a text prompt and away you go. Users can provide subjects, instruments, feelings, and/or their custom lyrics, and in just a minute a track is ready to play: 30 seconds for Udio and 2 minutes for Suno. Both platforms can extend tracks and provide different variations, and both are free to use right now. Some say that Udio's first version is already pretty good, better than Suno's version 3. I've used both of them to gain insight, but I ended up liking Udio's output more personally, because it was cleaner and possessed a better understanding of less typical genres. So even though the examples I've shown you, and will show you, are through using Udio, keep in mind that a lot of the content in this video also applies to Suno. "Suno version 3 is now available to everyone, including those on the free plan." "It is the best AI music generator by far, and it just got even better with V3." "A completely free AI music generator launched, called
Udio." People have been saying this is the "ChatGPT moment" for AI music, or calling it "the Sora of music". A lot of the time people massively overhype new AI launches; that's how everyone gets clicks. But there's some truth to this one.

To understand just how revolutionary this tech is, it helps to know how modern music is made before AI. The first step is to come up with a guitar riff. I made this one a few years ago while mucking around on my acoustic guitar, but for the final track I used a Gretsch electric guitar with a delay and reverb pedal and a distortion pedal. I was going for a calm 3/4 waltz time signature with a post-rock feel. [Music] After I recorded this, I hopped into Ableton and built some bass, drums and atmosphere around the guitar riff. Even for this process, every step involves many substeps, including EQing, compressing, adding effects, and just listening to the components in isolation and together as the song is being created. [Music] You'll definitely come across some production problems where frequencies clash; solving these issues is just part of the nature of mixing sounds together. Finally, I added some vocals and then arranged the track. So in total, a finished song can take days or even weeks to complete, and sometimes the songs just don't work out, so you have to try again. But with all of that effort, exploring the sonic stage and pulling a song out of the ether is half the fun. Now, in contrast, with AI all people have to do is type in some words and then get a song. I hope you can see the difference clearly now.

Here are some observations that I found with Udio. It includes production techniques like sidechaining, tape effects, and reverb and delay on vocals in appropriate areas. In some outputs the vocals seem extremely real, and there are harmonies in there. Even for electric guitar, it mimics the sounds of different pickups. All just so insane. Some weaknesses: sometimes it messes up big time, and I'll show a couple of examples of that. [Music] It's also very limited in flexibility: once you get an output, you can't really change it.
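One of those production techniques, sidechaining, is simple enough to show in code. Here is a minimal, illustrative Python sketch of sidechain ducking (my own toy example, not anything Udio actually runs): the bass is attenuated whenever the kick drum's envelope is loud, which produces the classic "pumping" effect.

```python
import math

def sidechain_duck(bass, kick, sr=8000, depth=0.8, release_ms=120.0):
    """Attenuate `bass` wherever `kick` is loud (classic sidechain ducking).

    depth: maximum gain reduction (0..1); release_ms: envelope release time.
    """
    # One-pole envelope follower on the kick: instant attack, slow release.
    release = math.exp(-1.0 / (sr * release_ms / 1000.0))
    level, env = 0.0, []
    for x in kick:
        level = max(abs(x), level * release)
        env.append(level)
    peak = max(env) or 1.0
    # Gain drops toward (1 - depth) when the kick envelope peaks.
    gain = [1.0 - depth * (e / peak) for e in env]
    return [b * g for b, g in zip(bass, gain)]

# Toy signals: a steady 55 Hz bass sine and a kick "hit" at the start.
sr = 8000
bass = [math.sin(2 * math.pi * 55 * n / sr) for n in range(sr)]
kick = [1.0] * 200 + [0.0] * (sr - 200)
ducked = sidechain_duck(bass, kick, sr=sr)
```

While the kick is sounding, the bass is pushed down to roughly 20% of its level; once the kick's envelope decays, the bass returns to full volume.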
There's low fidelity on some tracks, and it has weaknesses in some genres, such as UK jungle.

Okay, so what's the story behind these applications? Suno is a venture-backed company founded in late 2023. All four of the founders previously worked at Kensho, an AI tech startup for financial data which was later acquired by S&P Global. Suno even partnered with Microsoft Copilot, and is up to version 3. Like Suno, Udio was founded in late 2023, but the company only recently came out of stealth mode and made its application public a couple of weeks ago. It was founded by an ensemble of former researchers from Google DeepMind, and has the financial backing of the popular Silicon Valley venture capital firm Andreessen Horowitz, and also of musicians like Common and will.i.am. As you've seen (well, heard), the output is a rich combination of all sorts of instruments, so the training data must have been significant. And that leads us to the next section: how do these applications work?

Well, we'll dive in in a bit, but first let's rewind the tape. This isn't strictly the first time this has been tried; making music with the aid of computers goes back to 1957, when Lejaren Hiller and Leonard Isaacson composed a piece called the Illiac Suite. This piece of music is often considered to be the first created with the aid of a computer. There were notable efforts in the 1980s to point computers in a new direction: generative music. In 1984, scientist and composer David Cope created EMI, which stands for Experiments in Musical Intelligence. It was an interactive software tool that could generate music in the styles of different composers, from short pieces to entire operas. "And I said, you know, what if I could create a program that would create Bach chorales, and Chopin, and Stravinsky, and Mozart, and anybody? And the only way I could think of doing that was to create a program that was built around a database of music. Let's say I had a database of Bach
chorales, all 371 of them, and I had a little program that sat on top of this, the smallest program I could make, that would analyze that music and then create a new Bach chorale that wasn't one of the chorales in the database, but sounded like them all." Some of the outputs have been used in commercial settings over the years. There's a very interesting story regarding EMI, and I'll get into that later in the episode, so stick around for that.

And then there was the Computer Accompaniment System in 1985. It used algorithms to generate a complementary music track based on a user's live input. "The Computer Accompaniment System is based on a very robust pattern matcher that compares the notes of the live performance, as I play them, to the notes that are stored in the score." Even David Bowie developed a tool in the '90s. It was called the Verbasizer, a digital approach to lyric writing.

Well, that's all well and good, you might be thinking, but what about the modern stuff? As you probably know, from 2012 the world switched from hard-coded, narrow algorithms to neural networks, and with that, the modern AI boom had officially started. But the question has to be asked: how are neural nets being used in music generation? Well, in 2016, Google's Project Magenta turned a few heads when they released an AI-generated piano piece made by deep learning algorithms. Ultimately, the next big step was to generate music based on a simple text query in natural human language. Google's MusicLM did just that, and honestly, it was impressive for the time, but far from perfect. Alongside it were OpenAI's Jukebox, Stable Audio, Adobe's music generation project, and YouTube's Dream Track. Now, these were all valiant efforts, but they all had the same problem: they were ultimately fairly limited. They worked to a degree, but more often than not the composition sounded rigid and, to put it bluntly, just not very human. But even with such limitations, there were some trying to push the boundaries. Taryn Southern is
often considered to be one of the first musicians to use modern AI to release commercial music, and this was all the way back in 2017, if you can believe it. I asked her all about it. "I was having trouble finding really high-quality music for the documentary that was relatively inexpensive, and so I was looking for alternate options, and I came upon this article in the New York Times that was looking at artificial intelligence as a way to compose music. And I thought, well, that's really interesting. So I started experimenting with it and was blown away by what was possible. You know, this was even before LLMs hit the scene, and I think where we were at with music creation and AI back in 2017 was actually pretty far along. And so I was so excited by what I was hearing that I just decided to create an entire experiment out of the project, and made an album using, I think, four different AI technologies." Keep in mind that what Taryn did back in 2017 was quite different to what's going on today; there was still a lot of work that had to go into the compositions. But now, anyone can type in some text and get a decent output.

So how does it all work? What's the tech behind it? Like most modern generative AI applications, for example LLMs, these systems use vast amounts of data to understand patterns, and then create an output based on the user's input. For ChatGPT the output is text, and for these AIs it's original songs and lyrics. We'll touch on the ethical concerns a bit later, but you can probably realize something: a large language model like ChatGPT generates text by predicting the next word in a text sequence, but composing music is significantly more complex due to a lot of variables. Instrument tone, tempo, rhythm, sound design choices, the volume of different components, compression and EQ are just some of the variables that have to be considered. In addition, the system must understand what the user wants and how that relates to genres and particular sounds, in a coherent yet pleasing way. Not easy by any means.
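That next-word idea is easy to see in miniature. Here's a toy bigram "language model" in Python, a drastic simplification of what ChatGPT actually does, just to make the prediction loop concrete: count which word follows which in a tiny corpus, then greedily emit the most frequent successor at each step.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """For each word, count how often each successor word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=5):
    """Greedily emit the most frequent successor at each step."""
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break  # dead end: the word never appeared with a successor
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

corpus = "the song has a melody the song has a rhythm the song has a hook"
model = train_bigrams(corpus)
result = generate(model, "the")  # -> "the song has a melody the"
```

A real LLM predicts over tens of thousands of tokens with a deep neural network rather than a count table, but the loop is the same: look at the context, pick a likely next token, repeat. Music generation has to juggle far more simultaneous variables than this one-word-at-a-time view suggests.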
But what about the audio synthesis itself? Many point to audio diffusion as the secret sauce behind generative music. In simple terms, it's the process of adding and removing noise from a signal until you get the desired output. We first saw similar methods used in images, where image diffusion starts off with random noise which is then refined into a final image based on the interpretation of the prompt. Audio diffusion works in a similar way. This is a very high-level breakdown just to save time, but if you want to get into the depths of the technicalities, I'll leave a link to a few articles in the source document in the video description.

As always, if you've been following the generative AI space for the last two years, you already knew this part of the video was coming. OpenAI CTO Mira Murati came under public fire following an interview about the training data of their video generation tool, Sora. When asked if the impressive video output from Sora was the result of data trained on videos from YouTube, Facebook and Instagram, she said that she, quote, "was not sure about that". To many online, the answer was suspicious. With only a few weeks since the launch of Udio and Suno version 3, there's a question that has to be asked: were these systems trained on copyrighted material? One user set out to investigate. They ran their test by entering lyrics similar to the song "Dancing Queen" by ABBA into the prompt. They made no direct mention of the band or the song, but the outputs were very close to the original, including the basic melody, rhythm, and even the cadence of the vocals. Now, even human musicians are continuously accused and taken to court for a lot less. As the user later points out, it's not impossible to achieve these results without the original song being present in the dataset, but the similarity is striking. This result was also corroborated by other experiments carried out by the user. For obvious reasons, I can't play any of the copyrighted music here to show you the comparisons, but as always, I'll link the original article in the show notes.
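Circling back to the diffusion idea for a moment, the add-noise/remove-noise loop can be sketched in toy form. This is purely illustrative, not how Udio or Suno are actually built: the trained neural network (which in a real system predicts the clean audio from the noisy audio and the text prompt) is replaced here by a stand-in that already knows the clean answer. It still shows the shape of the reverse process: start from a noisy signal and take many small steps toward the model's prediction.

```python
import math, random

def make_signal(n=64):
    """The 'clean' target a trained model would have learned to predict."""
    return [math.sin(2 * math.pi * k / n) for k in range(n)]

def diffuse(signal, noise_scale=1.0, seed=0):
    """Forward process: corrupt the signal with Gaussian noise."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, noise_scale) for x in signal]

def denoise(noisy, predict_clean, steps=50):
    """Reverse process: repeatedly step toward the predicted clean signal.

    In a real diffusion model, predict_clean would be a neural network
    conditioned on the text prompt; here it is a stand-in."""
    x = list(noisy)
    for _ in range(steps):
        target = predict_clean(x)
        x = [xi + 0.2 * (ti - xi) for xi, ti in zip(x, target)]
    return x

clean = make_signal()
noisy = diffuse(clean)
recovered = denoise(noisy, lambda _: clean)

# Squared error to the clean signal shrinks dramatically after denoising.
err_before = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_after = sum((a - b) ** 2 for a, b in zip(recovered, clean))
```

The interesting engineering is entirely inside `predict_clean`: training a network that can guess the clean audio from noise plus a text description is what the linked articles cover in depth.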
But here's the thing: it seems like the founders already knew that this was coming, and they accepted it as part and parcel of the new AI startup culture. Antonio Rodriguez, one of Suno AI's investors, told Rolling Stone that he was aware of potential lawsuits from record labels, but he still decided to invest in the company. Suno AI has stated that it's having regular conversations with major record labels and is committed to respecting the work of artists. Whether that's actually true is questionable, especially when, a few weeks ago, over 200 artists wrote an open letter to AI developers and tech companies to, quote, "cease the use of AI to infringe upon and devalue the rights of human artists", end quote. The letter was signed by some big names old and new, from Bon Jovi to Billie Eilish and Metro Boomin.

All of that begs the question: what does this mean for the future of the music industry? One way to think of this AI, in the broadest sense, is like an out-of-control central bank: its monetary policy is the equivalent of printing money, leading to inflation, but this time it's diluting and devaluing the supply of music instead of money. If you're not in the top 1% of successful musicians, the music industry is already a hard place to make a consistent and stable income. From TikTok transforming how songs are consumed and discovered, to digital music platforms' relationships with musicians, the industry is already difficult. But now there's an unlimited supply of AI music coming right at you like a tsunami, and it's going to be hard to stay afloat. If you're a musician who exclusively makes stock music or royalty-free sounds for commercial purposes, this all isn't good news. But what about other musicians? I sat down with Rick Beato to discuss what this means for the future.

So, do you think AI will completely replace musicians one day, at all, or do you think that's not going to happen? No, I don't think so.
No, people have too much fun playing real instruments. I'm not going to stop playing just because there's AI guitar things. Say Spotify, in the future, has their AI versions of songs, and then you have people using the models that already exist to make their own music and uploading that to Spotify. What do you think that would do to the potential income of the real artists who are, you know, making and composing their own music? Oh, it'll definitely have an impact on it, yes. It's just diluting, you know, because there's no way real artists could compete with generative creations, right? I mean, how many things can you put up there in a day? How many things can you generate? It'd be very difficult to compete with that.

Here's an interesting question: do you think that one day we'll actually have a number one AI Billboard song? Yes, probably two years from now. Okay, you heard it here first, everyone. There will be a lot of news stories about it, and people will say, "boy, I really like this", and then they'll just create more, and there will be more. I can see a time when, you know, nine out of the top ten songs are AI-generated. Wow. Within ten years. I said in one of my videos, a year and a half ago or so I think it was, that people won't care if it's AI if they like the song, and I firmly believe that. It's just a matter of how people are being compensated.

Now, are you excited about any aspect of this at all? Totally excited. I mean, I think about the jobs that can be made easier. Can mastering be done better by AI? Probably. Can mixing be done better by AI? Probably. There are a lot of things: vocal editing, picking between takes, at least doing rough edits, rough edits for YouTube videos. Yes, I'm excited about it. You know, when drum machines came around in the '80s, people said, "oh, they're going to take drummers' jobs away". Then drummers back in the '80s emulated drum machines: they'd play drum machine fills, they'd do all these same kinds of fills, and then people would bring in
real drummers to emulate drum machines; they'd have their things programmed. Do you think becoming a professional musician, or even surviving as one, or thriving as one, will be more difficult because of AI in that case? I'm not sure about that. I don't know if becoming a professional musician will be affected by it. I think people enjoy playing music, regardless of whether there are AI versions out there that people are listening to. You know, the rise of autotune and things like that, which have a synthetic sound, enabled these programs. I mean, once you have a pitch-corrected voice, how far of a stretch is it from an AI-created voice that actually has tuning imperfections in it, like a real person? That's not that different, honestly. How many things are actually generated by computers? Grab some samples off Splice, you create the hi-hat track with an 808 hi-hat, you've got your kick, and you're making a hip-hop tune. Then you grab some keyboard sounds, and you don't even know how to play the keyboards, you hold down some things, you get some samples from here and there, you put them in, and then you create your vocal over that. You autotune it, and then you move notes around. It's pretty synthetic at that point, you know. What's the difference between that and AI?

The current wave of AI, including music generators, is made possible by neural networks. But what do they do, and how do they work? Fortunately, there's a fun and easy way to learn about that: Brilliant. I've talked about how I've used Brilliant's courses on artificial neural networks before, and for good reason, but they also have great interactive STEM courses in anything from maths to computer science and general science. It's convenient because you can learn wherever you like, whenever you like, and at your own pace. There are also follow-up questions to make sure you've digested what you've learned. So whether you just want to brush up on your learning or need a
refresher for your career, Brilliant has you covered. You can get started free for 30 days, and for ColdFusion viewers, Brilliant is offering 20% off an annual plan. Visit the URL brilliant.org/coldfusion to get started.

Okay, so back to the video. Last year, a US federal judge ruled that AI artwork can't be copyrighted. As these apps gain popularity, we need to protect artists, but how this is done and how this will be implemented is still up for debate. Things are uncertain, but one thing is for sure: we're heading into a new era of copyright legislation around art and artificial intelligence. If you're more interested in the law of copyright and artificial intelligence, check out my conversation with Devin from LegalEagle.

In the future, we're going to be inundated with AI music, but at that stage, live music performed by humans will become increasingly valuable. Another thing that I'm worried about is a form of AI fatigue, where any amazing human-made music that you hear could be diminished in a few years, because those who hear it could just think that it could be AI-generated. As for me personally, I think AI music generators could make for a great sampling tool. For example, I turned that classical piece generated by AI at the beginning of the video into a full track. I'll play part of it as an outro to this episode, but I'll leave a link to the full version below. But in saying that, I can't help but feel a little sense of loss. That joy that comes from having an idea in your head and turning it into musical form is no longer strictly human. Music creation is a personal journey of discovery, and it's unsettling that the art form is changing. But on the flip side, I can understand how freeing this is for those who have no musical knowledge and just want to create something; I'm not blind to that.

I know we covered a fair bit in this episode, but let me tell you a little story that I think perfectly encapsulates it all. We have to go back to 1997, to the University of Oregon, where a
small audience is patiently waiting to see a battle between a musician and a computer. Dr. Steve Larson is the musician in question, and he also teaches music theory at the university. He's ready to go up against a computer to compose a piece of music in the style of the famous Johann Sebastian Bach. The idea is simple: there are going to be three pieces played live, one composed by the original Bach, one by Dr. Larson, and one by EMI, the computer music generator which I mentioned at the start of the episode. All three entries will be performed live to an audience, and they'll have to guess which piece was composed by whom. Once all the performances ended, the audience incorrectly thought that Larson's piece was made by the computer, and the one composed by EMI, they thought that was the original Bach composition. Dr. Larson was genuinely upset about this. Quote: "Bach is absolutely one of my favorite composers. My admiration for his music is deep and cosmic. That people could be duped by a computer program was very disconcerting." End quote.

Now, you could take that quote, put it in any conversation in 2024 about AI music, and it would be just as relevant. The circumstances have changed, but as humans, we still perceive art the same. David Cope, the inventor of EMI, once revealed that his artificial program can make, quote, "beautiful music, but maybe not profound music", and I think I agree with that. Ultimately, AI generations are going to get more polished, better sounding, higher fidelity, and it's all going to be at our fingertips. But we have to remember: it's the messiness of our human selves that informs the art, and that's the part we relate to most. What makes us human is that we can listen to music, not just hear it, and that is where we're at with AI-generated music.

So, I hope you enjoyed that episode. If you did, feel free to subscribe; there's plenty of other interesting stuff here on ColdFusion, on science, technology and business. Anyway, that's about it from me. My name is Dagogo and you've
been watching ColdFusion, and I'll catch you again soon for the next episode. Oh yeah, and if you want to check out my full interview with Rick Beato, it's on the second podcast channel; I'll leave a link below. [Music] "ColdFusion... it's new thinking."
Info
Channel: ColdFusion
Views: 693,427
Keywords: Coldfusion, TV, Dagogo, Altraide, Technology, Apple, Google, Samsung, Facebook, Tesla
Id: wgvHnp9sbGM
Length: 25min 46sec (1546 seconds)
Published: Tue Apr 30 2024