Music Synthesis in Python

Video Statistics and Information

Captions
The music we just heard is music I generated with Python code, and I'm pretty sure that by the end of this talk you'll pretty much know how to do it yourself. Today we're going to talk about music synthesis with Python. Music synthesis has been around since the mid-80s, but what makes it special to talk about today is that there are quite a lot of Python packages and modules that have matured to the point that they are really robust and can be used in real-time applications. We have quite a few of them that are worth talking about, and we'll cover most of them.

Just a quick brief about myself: I studied communications and human-computer interaction in my undergrad. After graduation I became a product manager and worked in the Israeli startup scene for about five to six years. During this time I realized that I'm not only in love with managing the development process, I also want to build things for my own purposes. I got familiar with Python and it quickly became my biggest hobby, and last year I enrolled at NYU in a program called ITP, where I focus mostly on machine learning, music information retrieval, and the intersection between these two fields, which allows me to build audio applications for musicians for creative purposes.

This is what we're going to cover today. The first topic is music synthesis and the variety of Python packages that allow us to create music using just simple lines of code. Then we're going to talk about a more common, better-known Python task related to audio and music, and that is audio analysis. We'll wrap up by seeing how we can combine these two fields to build creative applications.

So let's get right to it. We'll start with music synthesis, and the first package I want to talk to you about is pyo. Pyo is a package developed by Ajax Sound Studio in Montreal, and it's absolutely one of the best packages I've ever used. It's a Python module written in C.
It has really amazing out-of-the-box objects that you can use to generate music, to build your own compositions, or to use as a sound engine for games. It also does a lot of tasks related to music analysis, and it has great documentation. Just to give you a taste of how easy it is to use pyo to generate music, let's do a quick demo.

Once pyo is installed, we import it and we start a server. Starting a server means we have a thread running in the background, waiting for us to send information to it, and it will make sure that whatever we send goes out of the speakers. So we have the server running, and what I'm doing here is basically shaping our signal. I'm using a square table, which means a Python list, or matrix, that describes how the signal will behave. Then I'm setting up the beat using the out-of-the-box Metro object. Think about a beat as, for a computer, taking a signal and just saying when we want to play it. So we play it every second, and we'll use the simplest polyphony possible. We'll set up an envelope just to make it a bit more interesting: I'm using a cosine table as our amplitude envelope, and I'm applying everything, the beat and the envelope, to the oscillator. The duration of the envelope will be a quarter of a second, this is the amplitude, and I'm just triggering random MIDI notes. So it sounds like that: just a simple beat. We can make it more interesting: we'll add some polyphony to it, make it an odd number, and shorten the beat. We can, for example, change the MIDI notes to something more interesting; if you came from the Game Boy generation, you'll probably appreciate that. We can change the scale too, but let's just stick with a simple lo-fi beat, which you'll probably love the way I love them.
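Pyo itself needs an audio backend running, so as a companion to the demo, here is a dependency-free sketch of the same recipe, a square-wave "table", a metronome-style beat, a cosine amplitude envelope lasting a quarter of a second, and random MIDI notes, rendered offline to a WAV file with only the standard library. The file name and constants are my own choices, not pyo's API:

```python
import math
import random
import struct
import wave

SR = 44100    # sample rate
BEAT = 0.25   # each envelope lasts a quarter of a second, as in the demo

def midi_to_hz(note):
    # standard MIDI tuning: A4 (note 69) = 440 Hz
    return 440.0 * 2 ** ((note - 69) / 12)

frames = []
for _ in range(8):                      # eight beats, one random MIDI note each
    freq = midi_to_hz(random.randint(48, 72))
    n = int(SR * BEAT)
    for i in range(n):
        # square "table": +1 for the first half of each cycle, -1 for the second
        sq = 1.0 if (freq * i / SR) % 1.0 < 0.5 else -1.0
        # cosine amplitude envelope: rises and falls smoothly over the beat
        env = 0.5 * (1 - math.cos(2 * math.pi * i / n))
        frames.append(int(32767 * 0.3 * env * sq))

with wave.open("beat.wav", "wb") as f:  # 16-bit mono WAV
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(struct.pack("<%dh" % len(frames), *frames))
```

In pyo the same pipeline is, roughly, a handful of objects (a square table, a Metro beat, a cosine envelope, a table oscillator) kept running on the server in real time instead of rendered offline.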
Here I'm again setting up a signal, shaping it to give it a distinctive timbre so we can identify it, and setting up the tempo again using the Metro object. They have a built-in LFO object, which is really nice, and we're going to use it just to hear the oscillation of a signal. I'm applying the envelope again, and I'm using an FM synth, a fabulous object they have out of the box, and it sounds like that. I'll increase the amplitude so we hear it more dramatically. You can change the frequency and the index of this one to make it softer, and this is the LFO here; it's just a function that goes up and down. One of the really sweet things they have is a saw object that actually simulates the 80s synths we used to love. I'm using the sine wave and the SuperSaw object, as they call it, and it sounds like that. That's nice, right? That's a throwback. Again, you can change the frequency and all, but let's continue: you can stop everything just like that, and you can stop the server entirely. All right, so that was pyo. Thank you, guys.

Now, pyo comes with a built-in GUI component that allows you to build desktop applications on top of pyo. Here are two examples from Ajax Sound Studio themselves. Soundgrain is an application that allows you to synthesize sound using granular synthesis. Granular synthesis, if you don't know it, is basically taking a recording and using tiny chunks of audio, say 20 milliseconds of audio, and when you combine them all together into clouds of audio, into passes over the audio, you get really nice sounds that you probably wouldn't be able to get otherwise. The other one is Cecilia. Cecilia actually gives you everything that pyo can give you in a desktop application; they just wanted to build something that lets you experiment with pyo.
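Since granular synthesis is easy to misread from a description alone, here is a minimal sketch of the idea behind Soundgrain: scatter short, windowed ~20 ms grains of a recording across an output buffer. The function and its parameters are hypothetical illustrations, not Soundgrain's code:

```python
import math
import random

SR = 44100
GRAIN = int(0.02 * SR)   # ~20 ms grains, as described in the talk

def granulate(source, n_grains=200, out_len=None, seed=0):
    """Overlap-add random 20 ms grains of `source` into a new buffer."""
    rng = random.Random(seed)
    out_len = out_len or len(source)
    out = [0.0] * out_len
    for _ in range(n_grains):
        start = rng.randrange(0, len(source) - GRAIN)   # where to read a grain
        dest = rng.randrange(0, out_len - GRAIN)        # where to write it
        for i in range(GRAIN):
            # Hann window on each grain avoids clicks at the grain edges
            w = 0.5 * (1 - math.cos(2 * math.pi * i / (GRAIN - 1)))
            out[dest + i] += source[start + i] * w
    return out

# one second of a 220 Hz sine wave stands in for the "recording"
src = [math.sin(2 * math.pi * 220 * i / SR) for i in range(SR)]
cloud = granulate(src)
```

Because the grains land at random positions and overlap, the result is the "cloud" texture the talk describes rather than a recognizable copy of the source.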
If you think about using pyo and you don't want to go deep into the code or the documentation, Cecilia is a really nice way to get familiar with it. So just to summarize: pyo gives you out-of-the-box DSP functions, out-of-the-box synthesis objects that you can really use to synthesize music. It has fabulous documentation, it's open source, and they have a mailing list you can sign up to, and they respond really quickly to any question. I'll post a link in the repository of the talk so you'll be able to investigate it yourself.

Another genre of music synthesis software you can use is audio engines. Audio engines are essentially a family of software, written originally in C, that have Python APIs we can use, so we don't need to learn the syntax of these audio engines; we can use our Python skills instead. The first one, and my personal favorite, is Csound. It was originally developed in 1985 at MIT and is written in C. It doesn't come with a built-in Python API; it used to, but it was deprecated for some unknown reason, and currently the community has built their own API, which is almost identical to the original one, and it works really well. It also supports the OSC protocol. OSC, the Open Sound Control protocol, is a protocol that allows communication between pieces of music software, and between controllers and music software, things like that. So if you don't want to learn Csound's own API, you can use OSC for that.

Another one, maybe the most popular engine out there, is SuperCollider, which was developed in 1996. It has the SC package, which supports only Python 2.7. Looking at it, I have to say that upgrading it to support Python 3 is not a lot of work, but it currently doesn't support Python 3. It's a very stable and well-documented package. Instead of learning the SuperCollider syntax, you can also use FoxDot, another fabulous Python package.
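The OSC wire format mentioned above is simple enough to encode by hand, which is part of why so many music tools support it: an address string, a type-tag string, and big-endian arguments, each padded to a four-byte boundary. A minimal sketch of the OSC 1.0 message layout (real projects would normally use a library such as python-osc):

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a basic OSC 1.0 message (int, float and string arguments only)."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, str):
            tags += "s"
            payload += _pad(a.encode())
        else:
            raise TypeError("unsupported OSC argument type")
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# e.g. tell a (hypothetical) synth address to play 440 Hz
msg = osc_message("/synth/freq", 440.0)
```

Sent over UDP, a packet like this is understood by Csound, SuperCollider, and most other OSC-aware software.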
You can use it to control SuperCollider as your sound engine, and it also supports OSC. The most recent one is ChucK, which was developed in 2003 at Princeton and is written in C, or a C-like flavor, I would say. The problem with ChucK is that it doesn't have a stable, or at least fully functional, Python API, at least not that I know of. The one I used to use is becoming more and more buggy; I'm pretty sure it will be fixed, but at least for now I cannot recommend taking ChucK off the shelf if you haven't used any of these audio engines before.

So I want to show you a demo of SuperCollider as the sound engine, and how I code in Python using FoxDot as its API. I'm launching FoxDot, which comes with a built-in interactive environment and its own interpreter. First we're going to use just the built-in stuff, just to make sure everything is clear. I'm setting up a new player; it's a synth, and the first thing I love to do is create a bassline. I'm using the built-in bass and just putting some arbitrary numbers here: let's make the duration four seconds and the amplitude one. It sounds like that. All right, that's pretty boring. One cool thing I love to do is use tuples to create chords, so they sound like that; that's a bit more interesting. And we can combine that with lists, something like that, so now we get something that is a bit more interesting. [Music] That's good.

Another synth, and this one is my personal favorite, is the ambi synth. See how easy it is to use these off-the-shelf objects just to get familiar with what's possible. I'm switching the player to this synth; let's give it, again, just a random number to begin with, make the duration one and the amplitude one. [Music] Okay, let's take it a little lower, closer to ambient. [Music] All right, again, let's make it a bit more interesting and use lists for different notes.
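FoxDot needs a running SuperCollider server to make sound, but the scheduling idea behind its players, lists of scale degrees, durations, and amplitudes cycled independently, with tuples played as chords, can be sketched without it. The `player` generator below is a hypothetical illustration of that logic, not FoxDot's implementation:

```python
from itertools import islice

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def degree_to_midi(degree, root=60):
    # a degree may be a tuple, i.e. a chord; octaves wrap every 7 scale steps
    if isinstance(degree, tuple):
        return tuple(degree_to_midi(d, root) for d in degree)
    octave, step = divmod(degree, 7)
    return root + 12 * octave + MAJOR[step]

def player(degrees, dur=1, amp=1):
    """Yield (midi_note, duration, amplitude) events, cycling each list
    independently, the way FoxDot players cycle their patterns."""
    durs = dur if isinstance(dur, list) else [dur]
    amps = amp if isinstance(amp, list) else [amp]
    i = 0
    while True:
        yield (degree_to_midi(degrees[i % len(degrees)]),
               durs[i % len(durs)],
               amps[i % len(amps)])
        i += 1

# two single notes plus a chord, with per-note durations as in the demo
events = list(islice(player([0, 2, (0, 2, 4)], dur=[1, .5, .5], amp=1), 6))
```

Because each list cycles on its own length, short duration or amplitude lists phase against longer note lists, which is where a lot of FoxDot's rhythmic interest comes from.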
We can use lists for different notes, and what's sweet about that is that we can also change the duration of the different notes: we can say that the first one will be a full second, but the next ones will be half a second and half a second. We can also change the amplitude for each one separately, so we can say that one will be a quarter of the full amplitude. [Music] Okay, so we got something a bit more interesting. Let's create another synth, and this one is the growl synth, one of the synths I usually use to create a simple beat. It's all well documented in FoxDot, so we don't really need to remember anything. [Music] That's a bit aggressive; let's just add a few notes and an amplitude, and it will be sweeter. [Music] Let's make the last one the pads synth, which usually sounds really nice. [Music] As you can see, I'm trying to keep everything at the lower end of the frequencies. [Music] So that would be a list, and another cool thing is that we can combine tuples and lists to have notes inside a chord. [Music] Okay, that's enough; we can stop everything. Stopping things is pretty easy: stop the pads, [Music] or stop the growl. Cool. So that was FoxDot as an API for SuperCollider as its audio engine. To me, it was relatively easy.

All right, now I want to talk about a more common task, and that's audio analysis. I know that usually, when people think about audio analysis, the common packages that come to mind are probably NumPy and SciPy. There is also pyAudioAnalysis, which is really robust and has a lot of built-in algorithms that can give you a lot of information about audio. There's also Essentia, an open source package that is really well documented and can give you a lot of things out of the box. But what I really love to use is the librosa package, which was originally developed, and is still developed, at NYU by Brian McFee. I want to show you a quick demo of librosa.
Just to see how easy it is to get useful information out of audio with a few lines of code: once we import librosa... I recorded something just for the sake of the demo. Let's hear it quickly: you hear the repetitions, it repeats itself, an electric guitar, and then there is a subtle change. So we load the audio, and usually this is the way people look at audio: as a time series of amplitudes. With a single line of code we can compute a chromagram using librosa, and this chromagram shows that I was playing F sharp, then G sharp, then B, and the repetitions of it. You can see in the different colors here the harmonics of the sound. If you want to classify instruments, that's a good way to do it, because you can identify the timbre of the instrument; a chromagram classifies frequencies into pitch classes.

But what I really like to do with librosa is use its similarity matrices. What I'm doing here is getting the strength of the onset events in the recording. An onset event is when something dramatic happens during the recording; usually, when it's dramatic and repeated, it's a beat of the recording. So I'm not only getting the times of these beats, I'm getting the strength of each event. I'm getting the beats using the built-in beat-tracking algorithm, and I'm getting the tempo, so I know that the tempo of the recording you just heard is 151 BPM. I'm getting the beats as a Python list, so I can sync them: I can execute the chromagram function again to get the chromagram information, and then sync the beats with the chromagram, which means I'll keep only the information at the times of the beats. This is just a common way to reduce the information in your recording. What I do next is try to find similarities within the synced data that I got.
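A sketch of those last two steps, beat-synchronizing features and building a self-similarity matrix, using plain Python lists in place of librosa's arrays (in librosa these are roughly `librosa.util.sync` and `librosa.segment.recurrence_matrix`). The toy chroma-like features here are my own, just to make the repetition visible:

```python
import math

def beat_sync(features, beat_frames):
    """Average feature vectors between consecutive beat frames."""
    synced = []
    bounds = beat_frames + [len(features)]
    for a, b in zip(bounds, bounds[1:]):
        cols = features[a:b]
        synced.append([sum(x) / len(cols) for x in zip(*cols)])
    return synced

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def self_similarity(vectors):
    """Similarity of every beat-synced frame with every other frame; repeated
    sections show up as bright diagonals in this matrix."""
    return [[cosine(u, v) for v in vectors] for u in vectors]

# toy features with an A-B-A-B structure over 8 "frames"
A, B = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
feats = [A, A, B, B, A, A, B, B]
S = self_similarity(beat_sync(feats, [0, 2, 4, 6]))
```

In `S`, frame 0 matches frame 2 (both the "A" section) and frame 1 matches frame 3, which is exactly the kind of repetition structure the talk reads off the plotted matrix.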
What I see here, this beautiful thing, shows me that there are repetitions in the song. Each of these diagonals shows me a repetition within the song, so I can say, for example, that around frame 30 something happens that is very similar to what happens around frame 5. You can see here that it repeats, there are about three repetitions, then something different happens, and then the repetitions are very similar to what we heard at the beginning. This is the type of information I really, really like, and as you can see, you can get a lot out of it. All right, so that was librosa.

Last, I want to show you a project I worked on last year. It's called LOONS, and it combines audio analysis and music synthesis. What LOONS does is capture a live recording from a player, analyze the recording, and generate sound using Csound, the audio engine we talked about, in a way that gives the musician the musical context of the original recording. It's not jamming with the musician; it gives them the context to keep on developing their musical idea. It's like having a bass player in your room, hearing what you're playing, and giving you a hint of what you originally played, so you can jam with it. Let's see it in a video; I think it will be a bit clearer.

The sound you hear is the default sound of this installation. [Music] Now, the guy with the beard, this is me. [Music] Okay, so things change. [Music] What we see here on the right is a script that runs in the background: it captures the recording using pyo and analyzes it using librosa. So this is me playing, [Music] the analysis happens, and then, after it figures out the scale, what notes I played, and what would go harmonically correctly with this recording, it sends data to Csound, and the other script, the one on the right, plays it.
It plays something that is correct with respect to the original idea, and this process can happen time and time again: it corrects itself and sends new data again. You can hear that the sound in the background is not a jamming type of thing; it's not music that you would listen to, it's music as a tool to develop musical ideas. You can see another cycle of it, and as you can see, in a second the analysis will happen and information will be sent to Csound; you hear the adjustment in the background. Yeah, so that was LOONS.

Okay, so that is it. You can get all the information I just mentioned on the GitHub repository, and if you have any questions, if you need more information, or if you do projects in this field and want to share them, I would love that. Thank you so much. [Applause] If you have any questions, feel free.

So, there is no machine learning going on there. It's tempting to think that everything that does something tricky goes through a machine learning process, but it's not; it's just real-time processing of the live recording. If I needed machine learning, it would only be because the data is so large that I'd need to find a good way to iterate over it. The technique here was to reduce the data: the number of audio samples you can get from a 10-second recording is in the millions, so if you find a good way to decrease the data, you can do things really fast. So the analysis is: I'm analyzing the recording, getting the notes, understanding whether the notes belong to a specific scale, and then, after I figure out what the scale is and what could be harmonically correct with that scale, I'm generating something that is not predictable, which means I'm not going over the entire scale; I'm just throwing in a note here and there and trying to make it close to the original recording.
So it won't be an expansion of the idea; it gives a frame to the original idea rather than trying to expand it using computational algorithms.

So, librosa does that: it won't give you the chord progression itself, but it will give you enough data so you can work out what the progression is. And I believe there are many packages that do the same; they won't give you a straight answer like what the chord progression or the scale is, but they will give you enough data so you can understand it. Any more questions?

Sure. First of all, there are a lot of papers that deal with this type of data, and I think every year they have something new to say. What I use it for specifically: sometimes I believe there is similarity within the song and I want to identify a repetition, and this is a good tool for that. Another thing is that sometimes, when the data is not clear at the end of the process, when the matrix is not something I can graphically iterate on and get useful information from, it acts as a debugging tool for me: I know that something went wrong earlier in the process and I need to refine my algorithm.

I'm sorry, I'm not sure I'm getting it... Oh, yeah. If I'm answering the question correctly, there are a lot of packages that allow live coding of this stuff, and also mapping to MIDI controllers, so sometimes you see people mapping a controller that actually plays things and also writes the beginning of the code, the beginning of a function, so you can continue from that. Did that answer your question? Okay. I'm sorry, I can't take any more questions, but I'm here, so feel free to talk to me. All right, thanks guys. [Applause]
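The scale analysis described in the talk and the Q&A, deciding which scale a set of played notes belongs to so that harmonically compatible notes can be generated, can be sketched as a simple pitch-class match. This is a guess at the general approach, not LOONS's actual code:

```python
# Pitch-class offsets of the major scale; other modes could be added the same way.
MAJOR = {0, 2, 4, 5, 7, 9, 11}

def scale_notes(root):
    """Pitch classes (0-11) of the major scale starting on `root`."""
    return {(root + step) % 12 for step in MAJOR}

def best_scale(midi_notes):
    """Return the major-scale root whose pitch classes cover the most played
    notes; ties break toward the lowest root."""
    played = [n % 12 for n in midi_notes]
    return max(range(12),
               key=lambda r: sum(p in scale_notes(r) for p in played))

# notes from a C-major riff: C, E, G, A, B
root = best_scale([60, 64, 67, 69, 71])
```

Once a scale is chosen, generating "harmonically correct" material reduces to picking occasional notes from `scale_notes(root)`, which matches the talk's description of throwing in a note here and there rather than walking the whole scale.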
Info
Channel: PyGotham 2017
Views: 40,102
Id: ROlkhVs15AM
Length: 26min 45sec (1605 seconds)
Published: Sat Oct 21 2017