Compressive Sensing


Captions
So last time I was trying to talk about this idea of Ax = b and solving over- and underdetermined systems. Strictly speaking you can't do it: you have an infinite number of solutions, or none. And yet you hit backslash and you get one. How did that work? You have to impose a constraint. Remember, in generic data science we do want to come back and solve this, and A is oftentimes going to be tall and skinny or short and fat. Overdetermined: there are no solutions. Underdetermined: infinite solutions. And yet I still hit backslash, or I do something, and I get a solution back. How does this work?

How it works is simply this: two options, each with a "subject to" line. In the underdetermined case I say, all right, fine, there's an infinite number of solutions, so give me one that satisfies my subject-to line. I can satisfy Ax = b exactly; it's just that there's an infinite number of ways to do it. So the problem becomes: minimize the l2 norm of x, or minimize the l1 norm of x, subject to Ax = b. In other words, satisfy the equations and give me back the solution with the smallest norm. The l1 option is going to be important for us today because it promotes sparsity: it tries to give you a solution where most of the values of x are zero. That's what the l1 norm does, which I didn't quite get to demonstrate last time because of the train wreck, but if you look at the notes it's all there.

If you're overdetermined it's different: you can't satisfy Ax = b at all. So instead you say, how about I minimize the residual, the norm of Ax - b, where again I can pick different norms, one or two. Those are your two canonical cases.

One way to think about this is that it's just optimization; that's how I've written it: minimize this, or minimize that, subject to a constraint. All of machine learning is optimization, all of it, and there are so many different ways to do machine learning because we just pick different things to optimize. Optimization is at the core of everything we do here, and the minute you write down these types of equations with non-square matrices, you're optimizing. That's my main point: when you hit backslash on a non-square matrix, optimization is happening.
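As a minimal MATLAB sketch of those two canonical cases (the matrix sizes, the random test data, and the use of the CVX toolbox for the l1 version are my assumptions, not code from the lecture):

```matlab
% Underdetermined case: A is short and fat, so Ax = b has infinitely many solutions.
m = 20; n = 100;
A = randn(m, n);  b = randn(m, 1);

x_bs = A \ b;        % backslash picks one particular solution for you
x_l2 = pinv(A) * b;  % minimum l2-norm solution among all solutions of Ax = b

% Minimum l1-norm solution (promotes sparsity); assumes the CVX toolbox is installed.
cvx_begin quiet
    variable x_l1(n)
    minimize( norm(x_l1, 1) )
    subject to
        A * x_l1 == b;
cvx_end

% Overdetermined case: A is tall and skinny, no exact solution exists,
% so minimize the residual norm instead; backslash returns the least-squares fit.
m2 = 100; n2 = 20;
A2 = randn(m2, n2);  b2 = randn(m2, 1);
x_ls = A2 \ b2;      % minimizes norm(A2*x - b2, 2)
```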
Now, it is kind of nice, the backslash, because it gives us a way to not have to think too hard, and in many cases the solution we get from backslash, whether over- or underdetermined, is perfectly fine: I get a solution, I can use it, do things with it. But now we're going to start doing things that are more customized. We want to start thinking a little bit about the solutions we're getting out, and when we really start thinking about it, it makes us ask: what's the optimization I should be doing for this problem?

Let me give you an example today, and it's really related to that l1 norm I was trying to promote last time, which promotes sparsity: it tries to make things zero. Here's what I want to do. I want to take a signal f, f as a function of t, and of course this is going to be a bunch of discrete points; I can measure f every delta-t for a window of time, so it's a vector.

Now here's a question, and it's an important concept related to this. Do you know what this phone actually is? It's a compressor and a decompressor of information. Part of the reason all this technology works is that every single picture you take gets compressed in there. That's why you can take all the pictures you want. When I grew up, we had to actually develop the film, so you thought very carefully: I don't want that one; can you fix your hair before I take the picture? I got one picture and I made it count, because it costs money to develop film. Now it's picture, picture, picture, and the reason we can do that is that when the image gets in there, it gets compressed. Signals are compressed, audio is compressed, video is compressed, and when you actually want to look at it, it decompresses on the fly. JPEG, which is the format a lot of images are stored in, is a compression format: you take your picture, and in pixel space it's massive, but in wavelet space it's very small. It's interesting, because this camera here is HD; not that long ago we didn't have such high quality, and now it's something like 1800 by 2400 pixels. Pretty soon they'll have 4K cameras in phones, and then 8K; a 4K image is on the order of 24 million pixels because it's an RGB cube. Do you know how heavy your phone gets when you start loading photos in there? It's so much data. So you don't store that; there's a space in which you store the images, which is wavelet space. That's what JPEG does.

So the whole idea of compressive sensing, which is what we're going after today, is to make use of this idea of compression. Here's the idea: there exists a basis in which I can take that signal and compress it. The abstract way to represent that is f, the vector, equals Psi times c: Psi holds my basis elements, and c is the loadings, the coefficients on those basis elements. There's a basis I want to put this signal into in order to represent it. For instance, if I draw something oscillatory, you might say, hey, I could probably put that in a Fourier basis. In fact a lot of audio signals have a lot of oscillatory components, so you can just transform the signal into the Fourier domain. Has everybody done a Fourier transform? If you haven't, it's super easy: fft, and now you're in the frequency domain. What's kind of remarkable about most signals is that if I put them in the Fourier domain, they're massively compressed.
Notice what it looks like in time versus frequency: a signal like this in the time domain might look complicated, but in the frequency domain it's mostly zeros with a few non-zero elements. The idea behind compression is just to say: I can represent that signal by throwing all the zeros away and keeping only those few coefficients. When you want it reconstructed, I hand you the four non-zero coefficients and I can build the whole signal back out. That's my compression space. Audio gets put into this Fourier domain and all I save is those coefficients, something like one percent of the data instead of the actual time series, and when you want to hear it, I say okay, I know what it looks like in the frequency domain, and I build it back.

So think about what happens when I take a picture. If I take a 4K picture with 24 million pixels, the first thing the camera does is throw away most of that information: it puts the image in the wavelet domain, saves about one percent, and throws away 99 percent of the data. It can do that because most of the coefficients are either zero or near zero, so you just throw them away. There are lossless compression formats too, as opposed to lossy compression, which always throws a little bit away, but either way the point is that it shrinks your data.
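Here is a small sketch of that compression idea, assuming a made-up two-tone test signal and a hypothetical rule of keeping only the largest one percent of the Fourier coefficients:

```matlab
% Build an oscillatory signal, transform it, keep only the biggest coefficients.
n = 4096;
t = linspace(0, 1, n);
f = sin(2*pi*50*t) + 0.5*sin(2*pi*120*t);   % two-tone test signal (my choice)

c = fft(f);                     % frequency-domain representation
[~, idx] = sort(abs(c), 'descend');
keep = idx(1:round(0.01*n));    % keep roughly 1% of the coefficients
c_compressed = zeros(size(c));
c_compressed(keep) = c(keep);   % everything else is thrown away

f_rebuilt = real(ifft(c_compressed));   % reconstruct from the few kept coefficients
```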
So the idea of compressive sensing is the following: if you're going to throw 99 percent of my data away anyway, what if I just collected the right information to begin with? Instead of collecting everything, I collect just what I need to represent the signal. I'm not going to measure everything and then throw most of it away; instead I say, I know this thing I'm taking a picture of, or this audio signal, can be represented with only one percent of the information, so how about I just go after that one percent to begin with? That's compressive sensing: take a small number of measurements and be able to reconstruct the entire thing.

How does it work? Step one: I have a basis, wavelets or Fourier, in which the signal is compressible. Wavelets and Fourier are super easy to work with; we'll work with Fourier today. What does it mean to compress? It means the vector c, which tells me how the signal loads onto the basis, is mostly zeros. For instance, for a signal like the one over there with, say, four non-zero components, if I look at the values of c it has four non-zero entries and everything else is zero. Notice that Psi times c is then basically selecting a small set of columns of the Fourier modes, the cos(0·t), cos(1·t), cos(2·t), ..., cos(n·t) modes, because most of c is zero. It says: I can throw most of the modes away; there are just a few I need, namely wherever c is non-zero. So that's step one: there exists a basis in which my signal is compressible.

Step two: I take a massive subsample of that signal. The way I'll represent this is b = Phi*f, where f, remember, is my signal as a vector, and Phi is a sampling matrix with, say, p rows. The non-zero components of Phi tell me where to sample: if I put ones here, here, here, and here, those are the four sampling positions, and when I multiply Phi by f it means I sample the signal at this point, this point, somewhere down here, and over here. Everything else in each row is zero, with a single one at the sample location, so this matrix tells me the actual points at which I sample my signal.
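A tiny sketch of that sampling matrix, with a toy signal and hypothetical measurement locations of my own choosing just to show the bookkeeping:

```matlab
% A sampling matrix Phi with p rows: each row is all zeros except a single one
% at the time index being measured, so Phi*f pulls out just those p values.
n   = 10;                      % tiny example (the lecture's signal has n = 5000)
f   = (1:n).';                 % stand-in signal, as a column vector
ind = [2 5 7 9];               % hypothetical measurement locations
p   = numel(ind);

Phi = zeros(p, n);
for k = 1:p
    Phi(k, ind(k)) = 1;        % one entry per row, at the sample location
end

b = Phi * f;                   % the massively subsampled measurement, b = Phi*f
% In practice you never build Phi explicitly; b = f(ind) does the same thing.
```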
Now, right away you may be perturbed if you come from an electrical engineering or signal processing background, because there's this thing called Shannon-Nyquist. Has anybody heard of it? How fast do you have to sample to resolve a signal? Twice the highest frequency. Why? Suppose I wanted to resolve that oscillatory signal: Shannon-Nyquist says you have to sample at least at twice the highest frequency, essentially because you have to see it go up and down. Now look at what I just drew: I have a sample here and a sample way over there. Shannon-Nyquist would say you have no shot at recovering all that structure in between; I didn't sample it, so how am I going to reconstruct that signal? It's a problem.

But there is an assumption baked into Shannon-Nyquist, and this is in some sense the whole idea of what we're going to do. I'm going to show you we're totally going to nail it; it's like magic. But it also works around this dogma, the dogma being: no, you cannot resolve that fast an oscillation if you don't sample that fast. So what gives? What is it about compressibility that's going to allow this to work? Remember, constraints and assumptions are really important in everything we do, and here's the assumption we haven't talked about: Shannon-Nyquist assumes that you basically have a broadband signal. In other words, there is some highest frequency omega, and all frequencies from minus omega to omega may be present. If that's the case, you're never going to beat Shannon-Nyquist. That's the caveat that usually doesn't get stated alongside "sample at twice the highest frequency". What's our assumption? Our assumption is that the signal is compressible: there are only a few frequencies that matter. That has huge impact. I'm going to take a small number of measurements, and I'm not trying to recreate a broadband signal; I'm trying to recreate a signal that has a sparse representation. That's the whole thing that allows this to happen. Questions before we do magic? We're going to try to do magic and hopefully not crash the train this time.

Student question: just to confirm, Nyquist says you have to sample at twice the highest frequency, so in that example you'd need to catch roughly every peak? Yes, exactly: if you're going to capture this structure, you have to see it go up and down. In contrast, what if I had only sampled here and here? Do you think I could recover it? I can't; how am I going to get everything in between? This failure is often called aliasing: if you don't sample at the right rate, you don't see any of that fast oscillation, and what you see instead looks like a totally different, lower frequency. Sample at the wrong rate and you get the wrong thing back. This has always set a limit for sensors and detectors: I have to be up at the Shannon-Nyquist rate to get the frequencies I want, otherwise everything at higher frequency is aliased. But of course most signals are in fact compressible, and we're going to exploit that.

So I've made two assumptions. Step one: the signal has a basis in which it's compressible, so the c vector is sparse, mostly zeros. Step two: there's a sampling matrix times the signal, b = Phi*f, which gives me a small vector; this is my massively subsampled measurement, a small number of points taken from the full signal f.

Now, ready for some linear algebra magic? I was making a big case that this is all just Ax = b, and here it is. I have a measurement b, which is a small set of measurements of the full signal: b = Phi*f, where the measurement matrix Phi tells me which entries of f I'm measuring. But what is f? f can be represented as Psi*c, the basis matrix times the coefficient vector. Put those together and Phi*Psi gives me my matrix A, so b = A*c. This is exactly my Ax = b. And what do I know about the solution I'm trying to find? It's sparse. So all I want to do is solve this subject to c having a small l1 norm. In other words, I construct my sampling matrix, multiply it by my basis to get A, take my measurements at those locations, and say: find the coefficients for me. Solve A*c = b, and because I know c is sparse, I use the l1 norm. Once I have c, it's easy: I reconstruct my full signal as f = Psi*c. My whole point of the lecture on Monday was supposed to be that if you use an l1 norm you get sparse solutions, which is exactly what I'm looking for.

Okay, let's program. Is there any wood to knock on? I didn't do that last time. That's fake wood, I think, but I'm glad there's something here for me to knock on. Let's build the signal, do this compression, and see how it all works out. Let me pull my screen up. Here we go.
So I'm going to make a signal. Time is going to be linspace from 0 to 1/8 of a second with n points, and the signal is the "A" tone on a telephone keypad; if you push the key, that's the sound you hear. Did you hear that? I just made the phone sound. The A tone is a combination of sin(1394·pi·t) plus sin(3266·pi·t). Let's plot it: it's basically a beating between two frequencies. If I run this, you see it; that's a lot of data for one eighth of a second, so let's zoom in, and there it is, the beating between two frequencies, the A tone.

So first of all, that A tone is two frequencies. When I represent it in the Fourier domain, how many frequencies should show up? Essentially two, because that's what I put in. But the domain of an eighth of a second has its own Fourier modes, which don't necessarily match the frequencies I used, so let's look. I'll plot with a linewidth of 2 so it looks decent, and instead of the fft I'm going to take the discrete cosine transform, ft = dct(f), and plot it in figure 2, so I'm putting the signal into a cosine basis. There you go: the transformed signal, and as advertised it's two frequencies. It's not exactly two spikes, because the frequencies I put in for the A tone don't precisely match the cosine frequencies that fit in an eighth of a second, but they're close; zoom in on them and it's mostly two frequencies.

Notice something artificial here. I have 5,000 points, so when I write down a signal and sample it, the number of Fourier modes I get is just however many measurements I have, regardless of the intrinsic rank of the dynamics. Remember when you were doing PCA-type things: there's an intrinsic rank to the data. Here, the "rank" the transform hands you is just how many points you picked: if I had 20 million points, the discrete cosine transform would give me 20 million modes, even though the signal is really two frequencies. So the number of Fourier modes I'm carrying around is totally artificial, and what I really want is the low-rank structure, those two modes; I want to capitalize on them. That's what makes this representation sparse: it's dominated by a few non-zero components, the ones I'm showing you. So far so good; everything's working ten lines in.
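A sketch reconstructing what's on screen here, assuming n = 5000 points and the variable names used in the lecture (dct needs the Signal Processing Toolbox):

```matlab
% The "A" dial tone: a beating between two frequencies over 1/8 of a second.
n = 5000;
t = linspace(0, 1/8, n);
f = sin(1394*pi*t) + sin(3266*pi*t);

figure(1), plot(t, f, 'Linewidth', 2)   % time domain: lots of points, two beating tones

ft = dct(f);                            % put the signal into the cosine basis
figure(2), plot(ft, 'Linewidth', 2)     % frequency domain: essentially two spikes
```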
All right, I'm a little gun-shy on the computer right now; my confidence was shaken last time, but I'm getting back on the horse. Let's start subsampling. What I've just shown you is a signal and its Fourier representation. By the way, most signals aren't that sparse, but most signals are pretty sparse, so let's leave it at that. Now, slightly unlike my notes, I'm going to take 500 samples. How? I'll build my measurement set with the randperm command; remember, I've used it before. I make a temporary variable, temp = randperm(5000), which takes the numbers from 1 to 5,000 and shuffles them randomly. I'm using those shuffled numbers to tell me the points at which I'm going to measure, so I grab the first 500 components of temp: now that it's shuffled, the first 500 entries give me 500 random measurement locations. Call those indices ind; then tr = t(ind) are the measurement times and fr = f(ind) are the values of the signal there. Then I go back to figure 1, hold on to what I had before, and plot tr against fr as red circles with linewidth 2, and what you see are my measurement locations. Instead of 5,000 locations I have 500, chosen at random.

A couple of things to note. That's what I get to measure, and here is my magic trick for today: given the red measurements, reconstruct the blue. What should look immediately challenging is that for some stretches you think, okay, you could probably do that, and for others you think, seriously, you can do that? Look right there: I have one measurement here and one over here. How is it going to know what happened in between? Nyquist would say you don't know; you're going to be screwed. Compressive sensing says: we're changing the paradigm, because I know that signal is compressible in a basis. I'm not looking for all frequencies; I'm just looking for the small number of frequencies consistent with all the data, pulling those out, and reconstructing everything with them. And then I'll zoom in further, and you'll think I'm really stretching it, maybe even lying to you, because it looks harder and harder: look at these stretches I have to fill in, where I only have a measurement here and a measurement here. How do I know the signal turned three times in between? I never took any measurements there. Can you really do that? So is everybody clear what the task is? I give you the red points, you rebuild the whole blue signal for me.
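Continuing the sketch above, the random subsampling step (p = 500 and the index variable name ind follow the lecture's description):

```matlab
% Pick 500 random measurement locations out of the n = 5000 time points.
p    = 500;
temp = randperm(n);    % shuffle the integers 1..n
ind  = temp(1:p);      % first 500 shuffled entries: the measurement locations
tr   = t(ind);         % times at which we measure
fr   = f(ind);         % measured values of the signal

figure(1), hold on
plot(tr, fr, 'ro', 'Linewidth', 2)   % red circles: the only data we keep
```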
With that stated, and with us still moving along without derailments, we're going to keep going. Now I know where I want to measure, and I know I want to represent the signal in the Fourier domain, so I need to figure out how, generically, I could take any signal, even a delta function, and put it into the Fourier domain; that gives me my mapping between frequency space and time space. So I make a matrix D which is dct(eye(n,n)). What am I building? A matrix that says: I have 5,000 components, so I have 5,000 Fourier modes. I know I'm not going to use them all, I know I only need a few of them, but I still have to have the basis, so I set the whole basis up. Remember the cartoon I drew: here's the basis, and it's big; all those basis elements are determined by the number of points I have. Then I have my measurement matrix that says where I'm going to measure, I multiply the two together, and then I can do the regression and find the c vector. That's all I'm building, and I build the basis through the discrete cosine transform because that's in fact the basis I'm going to use.

Then, when I measure, I multiply by Phi; in other words, Phi tells me where I'm measuring. So I make a matrix A which is D at the measurement locations, A = D(ind,:). These two moves are really important, so let me go back to the board. What does that multiplication do? It pulls certain rows out of the basis matrix: I'm subsampling massively at these locations, so only the rows of the basis at those locations matter. It says: here is what you're measuring, and wherever you picked to measure is the only place you get to sample the Fourier modes. Remember, ind was just some random locations. You do that and you get A. That sets up everything we need going forward: that is my matrix A, and that's why I called it A, because all I said I was going to do today was solve Ax = b.
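Continuing the sketch, those two moves in code (following the lecture's convention of building the basis directly from the DCT; the lecturer notes later that the forward/inverse direction of the transform is not a detail that matters for this demo):

```matlab
% The full cosine basis for an n-point signal: apply the DCT to each unit vector.
D = dct(eye(n, n));    % n x n basis matrix

% We only get to see that basis at the randomly sampled locations,
% so keep just the rows of D indexed by the measurement points.
A = D(ind, :);         % p x n: short and fat, massively underdetermined
```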
By the way, I'm really serious about this: don't ever discount what Ax = b can get you. A lot of people want to get fancy with the math, but Ax = b will solve 95 percent of your problems. Don't be uppity about your math, thinking it's too lowbrow for you and you need to be doing really advanced things; Ax = b is advanced, and it's hard. Most people, myself included, could use more of it. If I could rip on my younger self, I totally would: dude, you need help, you need to know more about Ax = b. And my younger self would say, no, I know it all, especially at 18, back when I was awesome and knew everything. It's only gone downhill since.

Anyway, Ax = b is kind of it, so let's start doing this. My structure is over there: I take a sampling of my real signal, and that sampling is fr, the values at the red dots. I wrote over there that b = A*c, and what I have right here, fr, is b. If I want to find the coefficient vector, call it x, one way is x = pinv(A) * fr', where I have to transpose fr; that's a least-squares fit: I subsample, then do least squares. What I get out is a bunch of coefficients, and once I have those coefficients I get my signal back just by projecting onto the Fourier modes. So as a first step I'm not even going to impose sparsity yet; I just have Ax = b, and I ask: what if I solved it with backslash or a pseudoinverse, what would I get? So I compute x, and my reconstructed signal, call it sig1, is the dct of that x. Now I make a new figure, figure 3, and plot the original signal, t versus f, in blue, and on top of it t versus sig.

So I subsampled and did a least-squares regression: how well did I do? Not terribly well. The orange curve is my approximation to the blue, and this reflects something important: how you solve Ax = b really matters; not all methods are the same. You can see you would throw this method away; clearly it's a poor thing to do here, because it doesn't give me anything like what I'm supposed to get. What did the least-squares fit actually do? How does least squares win? By making everything really small. In the frequency domain it made every coefficient really small. So let's make figure 4 and plot the Fourier coefficients of the original signal, the dct we computed earlier, in blue, and then what the pseudoinverse thinks the coefficients are, the x we just solved for, in red. (As an aside, whether you use the forward or inverse dct here doesn't really matter for this comparison.) Red is the pseudoinverse, and what did I tell you it would try to do? Make everything super small. It did: it got the best least-squares answer possible and decided your signal is made up of all of these Fourier components. It does not want to load any single coefficient heavily, because the l2 norm squares it, so it spreads the energy and makes everything as small as possible. Least squares wins its optimization and gives me a crap fit; it gives me nothing that looks like the spectrum. It says the signal is super broadband, that all frequencies matter. They don't.
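Continuing the sketch, the least-squares attempt and the two comparison figures (the figure numbers follow the lecture; the plot colors are incidental):

```matlab
% Least-squares (pseudoinverse) solve of A*x = fr: the l2 answer.
x    = pinv(A) * fr.';   % fr is a row vector of measurements, so transpose it
sig1 = dct(x);           % map the recovered coefficients back to the time domain

figure(3)
plot(t, f, 'b', t, sig1, 'Linewidth', 2)   % blue: true signal; second curve: l2 reconstruction

figure(4)
plot(ft, 'b', 'Linewidth', 2), hold on     % blue: true (sparse) DCT spectrum
plot(x, 'r')                               % red: pinv spreads small energy over all modes
```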
Okay, so now let's do l1. You see how I saved the potential train wreck for the end? Because if I really wreck the train now and we're out of time, you won't see the death, you'll just go home thinking, did he die? I don't know, it was a cliffhanger of an ending. So now we replace that solve. In my notes I do a comparison of all the methods; here let's just do the following. I type cvx_begin, which starts the convex optimization program CVX, declare a variable x of size n, and say: minimize the one-norm of x, subject to A times x equals fr (transposed), then cvx_end. So I go and solve this thing: minimize the l1 norm of the solution while satisfying the constraints exactly, which I can do because the system is massively underdetermined, so I know I can satisfy it; just give me the minimal l1 norm solution. Then once I get it, I plot the same things as before. It takes a little bit longer to run.

Hold on, where's my figure? Figure 3, yes. Let me dock this; I don't know why I keep doing this, this is how the train wreck started yesterday. Is everybody nervous? Do you feel like we're going to hit a wall right now? Me too. Okay, now it's working, and it's taking a while because it's actually doing an l1 optimization problem; you can see it churning away, explicitly doing the l1 optimization. By the way, why do people like l2, backslash, and pseudoinverses? They're wicked fast. If you want the least-squares fit, it's trivial, super fast, you get the solution immediately. We've optimized the hell out of l2 because we've been using it since the time of Pythagoras, and certainly algorithmically since Gauss and company; once people commit to l2, they start thinking hard about algorithms for computing it, and we've been doing that for a long time. People are thinking very hard about l1 right now, and you see optimization routines getting faster and faster, but l1 is still slow. Hold on, we're going to get there; let's pretend I'm saying really important things while we wait.

Student question: can you do this in the complex domain? Yes; I think in CVX you have to break it up into real and imaginary parts and do it that way. Another question: since we randomly permuted the indices, we didn't just select rows of the basis, we also shuffled their order, and the pseudoinverse in some sense un-permutes them; does that permutation of the selected rows matter for the l1 solve? No: I just did the sub-selection, and I know how to project back.
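Continuing the sketch, the l1 solve as described here; this assumes the CVX toolbox is installed and is noticeably slower than the pseudoinverse:

```matlab
% l1 minimization: among all coefficient vectors consistent with the measurements,
% pick the one with the smallest l1 norm (this is what promotes sparsity).
cvx_begin
    variable x(n)
    minimize( norm(x, 1) )
    subject to
        A * x == fr.';
cvx_end

sig2 = dct(x);                          % rebuild the full signal from the sparse x
figure(3), hold on
plot(t, sig2, 'r', 'Linewidth', 2)      % compare to the true signal in blue
figure(4), hold on
plot(x, 'm')                            % the recovered spectrum: just a few spikes
```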
Here we go. Oh, we got it, I think. Hold on, check it out: red is my solution, blue is the original. Nailed it: it went for a sparse solution. Let's go look at the reconstruction, and you can already see it stuck the landing. Awesome. Here's your job at home, everybody; the code's right there, play with it. This is a really important example. Notice what I did; this is like magic. I mean, you guys are never impressed with anything I do up here; I could pull a unicorn out of that wall and you'd be like, oh, okay. But check it out: take this thing, where I did 500 measurements, and go down to 200. It's amazing. When I first saw this I thought, I've got to teach this, because it's like magic, and it's just math with a constraint, and it allows you to do things you didn't think possible. It's telling you that if I massively subsample in my measurement space, it's not like I have no hope. I have hope, because if I know there's a compressible basis, I can massively subsample an experiment and still build the whole thing out. Many of you are in that situation with your real data: if you have real data, you don't get measurements of everything. This is a helpful way of thinking for you. Sweet. See you guys Friday.
Info
Channel: Nathan Kutz
Views: 3,588
Rating: 4.9694657 out of 5
Keywords: compressive sensing, compressive sampling, kutz
Id: rt5mMEmZHfs
Length: 51min 23sec (3083 seconds)
Published: Sat Feb 06 2021