Coding Challenge #104: Linear Regression with TensorFlow.js

Video Statistics and Information

Captions
Hello, you're here watching another coding challenge. You know, maybe this should just be one of my tutorial videos, but I'm going to make it a coding challenge because I'm going to attempt to do it in one video. What I'm doing is recreating something I've done before in some of my machine learning tutorials. It was suggested — well, it wasn't suggested exactly, but a Twitter user (apologies if I'm pronouncing the name incorrectly) created an interactive simulation of linear regression using TensorFlow.js. This is very similar to something I've done previously: I have a video, "Linear Regression with Gradient Descent," where I did this with plain JavaScript, and you could also look at another video where I go through the mathematics of gradient descent a little bit. Here's the thing: going through the mathematics and implementing it in JavaScript, while useful and perhaps background for this video, isn't required here. One of the exciting things about doing this with TensorFlow.js is that it has a nice API for optimizing loss functions, with the gradient descent algorithm built in, so I can just use that. Let's make a list — I'll come back here, but let's make a list.

First of all, what is linear regression anyway? Let's say we have a space. I drew this like a canvas, but really I should be talking about a generic two-dimensional Cartesian plane. In that plane there are a bunch of points. The idea of linear regression is to figure out: can we fit a line into this two-dimensional space that approximates all of these points as best we can? I can eyeball it and say this line gets close. What we're trying to do is minimize all of these distances (this is the most beautiful diagram I've ever made) between the points and the line.

The idea, then, is that we can make some predictions. If the x-axis represents height, we might predict weight on the y-axis. You can think of simple 2D data sets where there's a linear relationship between one field of data and another, so if we pick a new height we can make a guess at approximately what the weight is going to be. That's the idea of linear regression. It's incredibly simple. A lot of data isn't two-dimensional, and a lot of data doesn't fit a line — maybe a curve fits it better. Those more complex scenarios will come as we move forward: complex polynomial equations, neural-network-based learning, and other types of machine learning algorithms. But this is a good place for us to start.

So what do we need? We need a data set: a set of x's and y's. I'm going to create that data set through interactive clicking with the mouse. I also need something called a loss function. The loss function is a way of computing the error, and there are a bunch of different loss functions — we'll see, as I use TensorFlow.js in more tutorials, that I can select different loss functions for different scenarios. But in this scenario I'm going to use a simple, basic one, which I believe is called root mean squared error. Did I say that correctly? Is that the right name for it?
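As a plain-JavaScript illustration of that error measure (this function is my own sketch, not code from the video):

    // Mean squared error of a line y = mx + b against the data points.
    // Taking Math.sqrt of the result gives the "root" variant mentioned above.
    function meanSquaredError(xs, ys, m, b) {
      let sum = 0;
      for (let i = 0; i < xs.length; i++) {
        const guess = m * xs[i] + b;  // the line's y value at this x
        const error = guess - ys[i];  // vertical distance to the data point
        sum += error * error;         // square so direction doesn't matter
      }
      return sum / xs.length;
    }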
The idea is that I want to look at all of those distances. OK, I'm back — as I started talking about the loss function, I realized I didn't draw it right. I'm not actually looking for the distance from the point to the line, which would be perpendicular; I'm looking for the vertical distance. That's what I'm trying to minimize: I want a line where the sum of all these distances is the smallest number. Minimizing the loss.

So I have a loss function. In TensorFlow.js I also need something called an optimizer, and the optimizer is the thing that allows me to minimize the loss function. In order to do that, I also need a learning rate. These are all the pieces I need: I need the data, I need to define a loss function, I need to define an optimizer, and I say, "Hey, optimizer, minimize the loss function with this learning rate" — keep tweaking the parameters.

Actually, I've missed something very important: what are those parameters? The formula for a line is y = mx + b. I looked it up: m is the slope, and b is referred to as the y-intercept — kind of like a bias. By the way, if you've watched my other neural network tutorials, this is like the thing we're doing with all the neurons; it's all connected, but we're living in this very simple place. I need these parameters — these variables — because they're what allow me to create the predictions on the line, compare them with the actual points, compute the loss, and minimize it by tweaking those values. Tweak the values, minimize the loss: that's what we're doing. I've done this before in great detail; this is going to be in less detail because TensorFlow.js is going to do a lot of it for us.

The thing that's a little extra complicated is that we can't just work with arrays of numbers and variables the way we're used to in JavaScript. That brings me to my earlier videos, if you haven't looked at them already: what's a TensorFlow.js tensor, what's a variable, what's an operation, how does the memory management work? This is stuff we're going to lean on while I build this example. And this should, by the way, be an actual practical example of where I need a tf.variable — in that earlier video I explained what a tf.variable is and then just moved on without using it for anything, so hopefully this will show us.

All right, how about we write some code now? Actually, first: I popped back over here because I made a little mistake. Root mean squared error is a perfectly legitimate loss function, but most linear-regression-with-gradient-descent examples won't bother with the root ("root" refers to square root). We just want the mean squared error. Meaning: if this value is y and that value is the guess, the error is guess minus y, squared. If I do that for all of the points and average them, that's the mean. (I could sum them or average them — TensorFlow.js will take care of that for us.) So we're just going to use mean squared error. We take the differences, and the reason we have to square them is, for one, it has to do with the derivative stuff that's in my other videos, but also just so that positive versus negative doesn't matter — it's the size of the error that counts, whether it's up or down. That's key.
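As background — this is the manual version derived in the earlier plain-JavaScript videos, not code written in this challenge — one gradient-descent step for this loss looks roughly like the following sketch:

    // One gradient-descent step for y = mx + b under mean squared error.
    // The gradients follow from d(error^2)/dm = 2 * error * x and
    // d(error^2)/db = 2 * error; the constant 2 is folded into the rate.
    function gradientDescentStep(xs, ys, m, b, learningRate) {
      let dm = 0;
      let db = 0;
      for (let i = 0; i < xs.length; i++) {
        const error = (m * xs[i] + b) - ys[i];
        dm += error * xs[i];
        db += error;
      }
      m -= (dm / xs.length) * learningRate;
      b -= (db / xs.length) * learningRate;
      return { m, b };
    }

This is exactly the bookkeeping the TensorFlow.js optimizer is about to do for us automatically.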
All right, here's the kind of code we're going to start with. I'm using p5.js so I can draw stuff: I'm making a canvas, and the background is zero, which means it's black. That's what I have so far. So let's look at our list and first add the data set, the x's and y's. This is going to be easy: I just want x's to be an array and y's to be an array. (I could make those into vectors, but we're going to want to keep them as separate arrays for a variety of reasons.) Every time I click the mouse, I'll push mouseX onto the x's and mouseY onto the y's.

Ah, but here's a little thing — this is my drawing of the canvas. I'm having a déjà vu; I totally talked about this in the other video. The width is 400 and the height is 400, but I really want to think of 0,0 down at the bottom left and 1 over at the other side — I want to normalize everything between 0 and 1, with y pointing up. Everything's just going to work better if we do that. So I'm going to do a mapping: every y value, a pixel value between 0 and height, I'll map to between 1 and 0, and every x value between 0 and width I'll map to between 0 and 1. So: let x = map(mouseX, 0, width, 0, 1), and let y = map(mouseY, 0, height, 1, 0), and then push x and push y.

The other thing I want to do is add a draw loop and draw all those points. So I'll say stroke(255) and strokeWeight(4), then loop for (let i = 0; i < x_vals.length; i++). (I do this mapping so much that I should probably make functions that normalize and denormalize.) Then px = map(x_vals[i], 0, 1, 0, width) — this is the reverse mapping — and py = map(y_vals[i], 0, 1, height, 0), and then point(px, py). I'm not even using TensorFlow.js yet; I'm just doing the p5 stuff to draw things. Let's see if I'm getting the results I want: whenever I click, I get the points. Perfect. I kind of want to see them a bit more — let's make them bigger — that's too big — OK, great. We can see the points I'm clicking on. So what's next? I need a loss function and an optimizer — oh, and I need m and b.
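Assembled, the p5.js-only portion of the sketch so far looks roughly like this (I'm using the x_vals / y_vals names he settles on later in the video, and assuming p5.js is loaded):

    const x_vals = [];
    const y_vals = [];

    function setup() {
      createCanvas(400, 400);
    }

    function mousePressed() {
      // Normalize pixel coordinates to 0-1, flipping y so it points up.
      x_vals.push(map(mouseX, 0, width, 0, 1));
      y_vals.push(map(mouseY, 0, height, 1, 0));
    }

    function draw() {
      background(0);
      stroke(255);
      strokeWeight(8);
      for (let i = 0; i < x_vals.length; i++) {
        // Denormalize back to pixel coordinates for drawing.
        const px = map(x_vals[i], 0, 1, 0, width);
        const py = map(y_vals[i], 0, 1, height, 0);
        point(px, py);
      }
    }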
Let's make those, because I'm looking for somewhere to get some TensorFlow.js stuff working. I'm going to declare m and b without initializing them up top. (I probably should be using const in various places to protect myself from reassigning something by accident, but I'm going to be loosey-goosey and use let. These could be const, but I'm not going to get into the whole let-versus-const thing; it makes me crazy.) In setup I'll say m = tf.scalar(random(1)) — I'm using the p5 random() function to give me a random number between 0 and 1, because I've got to start somewhere. This is kind of like initializing the weights of a neural network. There is no neural network here; I'm just optimizing this function y = mx + b, but m and b are like weights. I'm going to initialize them randomly, and use a scalar because each is a single number (go back to my TensorFlow.js intro videos and you'll see). Then b, the same thing.

Ah, but these are the things that have to change, right? The data never changes — it's fixed — while m and b change over time. I need to tweak those, which means they have to be variable; they have to be able to change. So, I think what I write is tf.variable — I wrap them in tf.variable(). Now I have m and b as tf.variables. Isn't it crazy? You see this kind of code and it looks like the craziest, scariest thing, but you realize it's basically "make a number." Because we're working at this kind of lower level, in GPU land, I've got to be very specific: this is a single number, and it's going to be variable. But really, it's just a random number.

OK, now what do we need to do? I don't think I actually said this, but I need to write a function — called predict, maybe — which takes in all of the x's (just the x's) and gives me the y predictions based on where the line is, because I need to compare the y predictions to the actual y values to get the mean squared error. So let's write that function. I'm putting these in arbitrary places, but I'm writing a function called predict, and what it needs to do is take some x's and return some y's. I don't want to predict just one value; I want to predict a bunch. Here's the thing: if the x's coming in are a plain array, I need to make them into a tensor. So I'll say const tfxs — that might be a bad name — is a tensor, a one-dimensional tensor: tf.tensor1d(xs). I think that will do it. You turn it into a tensor, and then I apply the formula for a line, y = mx + b: the xs tensor multiplied — is it mul()? — multiplied by m, plus b. That's the idea: I get a plain array of numbers, I turn it into a tensor, I apply the formula for a line, and those are the predictions, the y's. If I don't like my naming here, maybe I'll call this x and that xs — I don't know, I'll come back to it later. So I have the predict function, and I have a data set.
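Assembled, that TensorFlow.js code looks roughly like this (assuming TensorFlow.js is loaded alongside p5.js):

    let m, b;

    function setup() {
      createCanvas(400, 400);
      // Trainable variables, randomly initialized between 0 and 1.
      // tf.variable() marks them as values the optimizer may adjust.
      m = tf.variable(tf.scalar(random(1)));
      b = tf.variable(tf.scalar(random(1)));
    }

    // Given a plain array of x values, return a tensor of the
    // corresponding y values on the current line: y = mx + b.
    function predict(x) {
      const xs = tf.tensor1d(x);
      const ys = xs.mul(m).add(b);
      return ys;
    }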
Now I need a loss function — but before the loss function, let's create the optimizer and the learning rate. This is what's wonderful about working with TensorFlow.js: when I say "make the optimizer," I just mean make a TensorFlow.js optimizer. It exists; it'll do this math for us. This is not something I covered in my other videos, so let's go look in the documentation for an optimizer. There are all these different kinds of optimizers. SGD — stochastic gradient descent — is the idea of slowly adjusting m and b to minimize the loss function, and I've covered it in more detail in the other videos. So I'm going to click on that and look here, and this is basically what I need to do — we've got the code right here. Look at this — oh my goodness, there's some stuff here we could really use. So I'm going to grab this and put it up here. I want a learning rate — I'm going to start with a much bigger learning rate — and I want an optimizer. So now I have a learning rate and an optimizer, and the optimizer is doing stochastic gradient descent.

Now I need that loss function. The loss function is something I'm going to write myself, called loss. Actually, let's go back to the documentation — look at this: it's the fancy ES6 way of writing a function, but I'm going to write it a less fancy way. What I want is a loss function that takes some predictions and some labels. The predictions are the y values I'm getting from the predict function; the labels are the actual y values that are part of the data set. (And by the way, I'm going to have to do memory management — don't worry, if you're screaming at me that I haven't worried about memory management, I'm going to do that later.) What I want is to return the predictions minus the labels, squared, and then take the mean of them. That makes sense, right? Because when I said mean squared error, it's the prediction — the guess — minus the label, which is the actual y, squared. And look at this: all of these mathematical operations are inside TensorFlow.js, and you can chain them. Predictions is a tensor, labels is a tensor, and remember, all of these operations keep producing new tensors — I'm going to have to tidy and clean all this stuff up for memory management, but again, I'll worry about that later.

So now I have the loss function. All right, what do I want to do every frame? I think I actually have everything: the loss function, the data, the optimizer, a predict function, a learning rate. I can minimize — oh, this I haven't done: the training, the actual training. What does it mean to train? To train means to minimize the loss function with the optimizer, adjusting m and b based on that. Let's see if we can write that. I have a feeling that was on the documentation page I went to, so maybe I can just copy it from there — I'm going to happily cheat here. Thank you very much, TensorFlow.js documentation. I'm going to put this in draw, so every time through draw I'm going to minimize.

Let's look at this. OK, this is a little different. First of all, it's using nice fancy ES6 arrow notation, which I'm somewhat happy about, but let me just write a function here called train, where the idea of the train function is to execute the loss with the predictions and the actual y's. Hmm — so here what I'd really be doing is minimizing "train"? That's weird; this isn't really it — no, the minimizing itself is what training would be doing, so that's a terrible name. Actually, it's silly for me to even name this; it makes sense to just make it an anonymous function, and what I'm minimizing is the loss function. And if I want to be nice and ES6-like with my arrow notation (you can watch my video about arrow functions), I can get rid of a lot of the extra stuff here, and this should be good. So I just want to minimize the loss function.
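Which gives, roughly (the 0.5 learning rate is the value he settles on later in the video; he starts with a different one):

    // Stochastic gradient descent optimizer with a fixed learning rate.
    const learningRate = 0.5;
    const optimizer = tf.train.sgd(learningRate);

    // Mean squared error: predictions and labels are both tensors, and
    // each chained op (sub, square, mean) produces a new tensor.
    function loss(predictions, labels) {
      return predictions.sub(labels).square().mean();
    }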
Now here's the thing: these have to be tensors. The loss function requires predictions and labels, and they have to be tensors, and if you remember, my x's and y's aren't tensors. When I call the predict function with the x's it gives me back a tensor — and I can't believe I haven't run this code yet; it's a terrible thing, usually I try to run my code incrementally all the time, and I guess I've forgotten to do that, so people in the chat are probably telling me about mistakes I'm making. So this is a tensor, but this is still a plain array. I've got to rethink the naming — maybe somebody in the chat has an idea for me. Actually, I have an idea; permit me a moment of refactoring. Since it's not a tensor, I'm going to call the plain arrays x_vals and y_vals, because that's going to help me remember; and whenever I say xs or ys, that's really a tensor (I suppose I could have done txs). So here what I'm doing is predicting from x_vals, and then ys is tf.tensor1d(y_vals) — I need to create that tensor — and now I can minimize the loss with the predictions from the x_vals and the actual y_vals. OK, this is good; let's just run this.

All right — I'm from the future, different day, different clothes, and I'm breaking into this video to mention something really important that I didn't actually mention when I recorded the coding challenge originally: what is that minimize function doing? How does it actually work? We need to look at the TensorFlow.js documentation to see. I've got the code from past me here in the future, and this is the part I'm talking about: how is this going to adjust m and b? Those are the parameters — the weights, the variables — we need to adjust to minimize that loss function, but nowhere in here am I saying "those are the variables to work with." Well, this is part of what TensorFlow.js does natively. The fact that I made these tf.variables up there means they are variables that can be adjusted. If we look at the TensorFlow.js documentation, you'll see what minimize() does: it executes this function f — that is, this whole function here (and by the way, the return is implicit because I'm using an arrow function) — and minimizes the output of that function, trying to get it lower by computing the gradients with respect to a list of trainable variables provided by varList. Guess what: I didn't provide a list of trainable variables, and "if no list is provided, it defaults to all trainable variables." That's what's going on — that's what I did in this coding challenge. These are all the trainable variables in my system; if I wanted to use only m, or only b, I'd put those in a list. So that's all I have to say about that. I'm going to fade back into the other video now, where I debug this and hit a couple of other problems. Goodbye!

OK: "predictions.sub(...).squared is not a function," at loss, at optimizer.minimize. So what do I have wrong in my loss function? sub(labels), squared, mean... hmm. Oh, you know what it is: there's nothing in the arrays at the beginning — they have zero things in them.
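Here's a sketch of the training call as it ends up in draw, building on the snippets above; the explicit varList variant in the comment is my illustration of the documentation point (the challenge itself relies on the default):

    function draw() {
      // minimize() runs this function, computes the gradients of the
      // returned loss with respect to every trainable tf.variable
      // (m and b here), and adjusts them to lower the loss.
      optimizer.minimize(() => loss(predict(x_vals), tf.tensor1d(y_vals)));

      // Equivalent, naming the trainable variables explicitly:
      // optimizer.minimize(
      //   () => loss(predict(x_vals), tf.tensor1d(y_vals)),
      //   false,   // returnCost: don't return the loss value
      //   [m, b]   // varList: which variables to adjust
      // );
    }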
So, a couple of options. One is I could put something in them, but I think I should just say: only do this if x_vals.length is greater than zero. Do I really want to bother with any of this if there are no values in there? Calling predict and everything else with an empty array is going to cause problems. That makes sense. All right, let's try this. "xs is not defined" — ah, x_vals, my naming, in sketch.js. This is x_vals, and this is x_vals and y_vals. OK, that should be good. Let's try this. Hmm — I'm being told in the chat, breaking news, that this is actually .square(), not .squared(). Is that right? Yeah — it's .square(). OK, there we go.

So things are going, and I can look at m — m.print() — and you can see it's changing. It's actually training; the value of m is changing, so everything's going and working. The problem is I'm not seeing the results. Let's also check b. And I haven't done any memory cleanup, so if I say memory.numTensors — is that what it is? What is it again? Let's look under memory management... oh, it's tf.memory(). I know you can't see this, but this is what I want: I want to check whether I have stuff to clean up — and you can see I have 1,147 tensors. So I need to do the memory cleanup. Probably better practice would be to clean up as I go, but I'm going to clean up at the end.

All right — I'm just going to call noLoop() to shut this off for a second, and let's go here. What do I need to do? I need to visualize what's going on. How do I do that? I need to draw a line, and the way I'd draw the line: all I really need is to give myself the x value of 0 and the x value of 1, get the two y values, and draw a line between those two points. So if I say — this is silly, why not use the predict function? — x1 = tf.scalar(0), y1 = predict(x1), x2 = tf.scalar(1), y2 = predict(x2). This should give me... I mean, it's a little silly for me not to just keep an extra copy of m and b as regular numbers, but let's keep going with this. Will this work? Is it going to be able to take a scalar and make a 1D tensor? I think so. Let me do x1.print() and y1.print() and see. "tensor1d requires values to be a flat typed array." Hmm. One thing I could do is, instead of making it a scalar, make it a 1D tensor — that's what it wants — and do the same thing here; I have to put the value in as an array then, but it's just one value. Oh, this is so silly — why am I doing x1 and x2 separately? I could just do it with [0, 1]: const xs — can I use xs again? yeah, yeah — and then const ys = predict(xs), and I have both those points. Then let's say xs.print() and ys.print() and see if this works. "predict is not defined" — because my "e" key doesn't work; I have to type it several times. "tensor1d requires values to be a flat typed array." Oh, silly, silly me: predict doesn't want a tensor, it wants the plain array — my own predict function does the converting. I totally forgot already. Sorry, everybody — there we go.
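With the .square() fix and the empty-array guard in place, and using two x values to get the line's endpoints, the relevant part of draw looks roughly like:

    function draw() {
      // Don't train (or predict) until there's at least one point.
      if (x_vals.length > 0) {
        optimizer.minimize(() => loss(predict(x_vals), tf.tensor1d(y_vals)));
      }

      // The y values at x = 0 and x = 1 are enough to draw the whole line.
      const lineX = [0, 1];
      const ys = predict(lineX);
      // ...pull the values out and draw in the next step...
    }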
You see, there are so many different ways you could do this. I could enforce converting to a tensor before you pass it in to predict, but a lot of these decisions are arbitrary, so you might have a better way of doing it. I'm going to do it this way. So now I have the xs and the ys, and I don't even need to say xs.print() — we can see, OK, great, I'm getting these points. A viewer in the chat makes an excellent point, which is that I should think about mapping everything between -1 and 1 with 0,0 in the center. That's not such a bad idea; let me keep going with this and maybe change it after the fact, because this should work anyway.

So now here's the awkward thing. All I need to do is basically say: let x1 = map(lineX[0], 0, 1, 0, width) — which is kind of silly, I could just multiply by width, but I'll go with the full normalizing — and x2 = map(lineX[1], 0, 1, 0, width). That gives me x1 and x2, which are just 0 and width. Now for y1 and y2, I want to map the y values to between height and 0, because I'm flipping it. Then I just want to say line(x1, y1, x2, y2). That's really all I need to do: get two points on the line and draw a line between them. This is fine for the x's, because they're not tensors and I can use plain numbers, but my y's are tensors, so I need to get the values back out. A way to do that is with a function called data() — and I'm just going to say dataSync() right now. Let me comment all this out, console.log it, and see what comes out. (This is kind of a bad idea for a variety of reasons, but I think it's going to work.) OK, you can see I'm getting those numbers back as a float array.

Here's the thing: this really wants not a callback but a promise, and I'm so happy I just did a whole video series on promises. I really should be saying data().then(...), and there's even something called tf.nextFrame() which lets me think about the asynchronous nature of pulling data out of a tensor into a number I can use in an animation. These are key things and I'm definitely going to get to them. But this is just two numbers; I think my animation can handle dataSync(). Maybe somebody from the TensorFlow.js team is going to say this was not just a bad idea but a really bad idea — I'm not so sure, but let's just get it to work and see if it demonstrates the idea. So I'm going to call this lineY, and I should be able to get y1 and y2 — the values at 0 and 1 — from lineY. And now we should see... we should be done. "lineY is not defined" at sketch.js line 61 — I think I just didn't hit save. Yeah. Oh, and I haven't clicked any points. Hey, look at that — it's working!

All right, a couple of things. Number one, let's say strokeWeight(2) for the line. And by the way, we can now start to play with the learning rate — before I clean up the memory stuff, I want to play with the learning rate. Let's make it 0.01 so you can see what happens with a lower learning rate... is it really working? I shouldn't use such a low learning rate; let's make it 0.5. Yeah, it's definitely happy. OK, so this is working: linear regression.
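Inside draw, continuing from the previous snippet, pulling the two y values out with dataSync() and mapping everything back to pixels gives roughly:

    const lineX = [0, 1];
    const ys = predict(lineX);

    // dataSync() blocks and copies the tensor's values to the CPU as a
    // Float32Array; data() is the Promise-based, non-blocking alternative.
    const lineY = ys.dataSync();

    const x1 = map(lineX[0], 0, 1, 0, width);
    const x2 = map(lineX[1], 0, 1, 0, width);
    const y1 = map(lineY[0], 0, 1, height, 0);  // flip y back to pixels
    const y2 = map(lineY[1], 0, 1, height, 0);

    strokeWeight(2);
    line(x1, y1, x2, y2);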
I've created it — but I have a severe problem: I am just filling the GPU with tensors and tensors and tensors, and never cleaning them up. So now it's my job to go through and find every place I'm creating a tensor and dispose of it. I can use tf.tidy() to do that automatically, or I can just use the actual dispose() function, which I might be inclined to try first.

All right, let's go through it. These — m and b — I always want to keep; they're the variables I need throughout the course of this program. The loss function: do I just put tidy in there? And predict: do I wrap tidy around that? What if I just say tf.tidy() around all of this in draw — will that do it? And down here, maybe what I'm going to do is just dispose these manually (there's no logic to what I'm doing, I'm mixing approaches). That's just the ys, right — ys is the only thing that's a tensor here. So this should tidy everything — but hopefully not the variables I need to keep — rather than me individually figuring out what to dispose of. And down here I know this is my only tensor. By the way, I should call this lineX, to be consistent with my variable naming: I'm only using the xs/ys style of name when I have something that's actually a tensor, which helps me remember what to clean up and what not to.

Let's see if this goes. OK, it's still running... 221? No. There are fewer tensors, but I haven't cleaned up everything, so what could I be missing? Maybe the call to predict — wouldn't tidy clean that up? All right, I need to debug this somehow. One thing I can do is start commenting stuff out to see where the memory leak is. One worry I have: I really think the loss and predict functions generate a lot of tensors. I believe tf.tidy() should clean up everything, but just for the sake of argument let's comment this out — now of course the learning is no longer happening — and I might as well console.log the number of tensors rather than having to keep asking for it. Hmm, what did I do wrong? tf.memory().numTensors — what is it, how come I can't remember — "numTensors is not a function" — no, it's just a property, numTensors, sorry everybody. And I need another parenthesis here. A little digression there.

All right, we can see it's growing, so let's keep commenting stuff out to see what's causing the memory leak. Let's comment out this whole area down here. Ah — good news, everybody: the memory leak is in that part. Let's put this back just to be sure. OK, so the memory leak is definitely down here... oh my goodness... no, I'm not sure — let's put this back in; I thought I saw it, but then I didn't. So this is a tensor, and I'm disposing it... oh, predict! Aha: the predict function makes other tensors, and those weren't getting cleaned up, because predict wasn't inside a tidy — I was only manually disposing the ys down there. That's what it is. So let me use tidy, I guess. I kind of liked disposing things manually — the tidy thing kind of freaks me out — but the problem is I have a scope issue with lineY: no matter what I do, if I pull this value out inside the tidy, the tidy is going to clean up everything. So I guess it's not the biggest deal, at least for the moment, for me to just put everything inside the tidy.
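The leak-hunting check used throughout this section is simply logging the tensor count every frame:

    // If this number climbs every frame, tensors are leaking;
    // once everything is tidied/disposed it should hold steady.
    console.log(tf.memory().numTensors);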
Well, that's it — let's put everything inside the tidy for right now. There's probably a way I could simplify that, but this should work. Let's give it a try... there we go: there are only ever five tensors at any time, so there's no more memory leak. Five tensors. Linear regression with gradient descent, in TensorFlow.js, interactive — here it is.

So what's left? Number one, things that could be improved. I wanted to talk through some of those, but somebody in the chat already made a very good suggestion: it's very awkward and unnecessary how I put everything in tidy. Let me take that out, because predict returns something, so I can actually put the tidy right around the predict call — I don't know why I didn't think of that; it's only the predict function that creates the extra tensors there. I can put the tidy right there and use my fancy ES6 arrow syntax, so the return is assumed. Then ys is not disposed by the tidy — tidy isn't going to clean up the returned value — but I can dispose of it manually once I have the values out. I think this will do the trick. Let me take a look... yeah, I like this better. There are probably other styles or ways you could do it; the point is you've got to keep track of all the tensors you're making and dispose of them.

OK, I think we're going to wrap this video up. I'm getting all these great suggestions from the chat — for example, I could have tidied the tensors inside predict itself. So, number one: this code is going to get posted to the Coding Train website with the coding challenges, so make your improvements and add them as community contributions. Some things I would love to see: visualize — graph — the loss value; I'm sure there's a way to get the loss value out of that minimize function. That's one idea. A viewer also suggested trying some of the other optimizers: what happens if you go to the TensorFlow.js documentation and just use some of the others? What are they, what will they do, do you get better or worse results? Can you make the learning rate somehow interactive, so you can adjust it? Maybe you can come up with some really clever visual ideas with this.

But I think I'm good; I have the basic idea. If you really want to dive as deeply as you can into linear regression with gradient descent, you can go back and watch my other videos where I did this with just JavaScript and p5.js. Now you've seen it with JavaScript, p5.js, and TensorFlow.js. So I look forward to your feedback and hearing more about it. More TensorFlow.js videos to come — I want to get to some more practical things that you might want to do for interactive creative arts projects, but I'm still in the weeds here of just trying to understand the nuts and bolts of how the library works. I hope you're enjoying that, and I look forward to seeing you in future videos.
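For reference, here is the finished sketch assembled from the walkthrough above — a sketch, not the exact posted code; names and the 0.5 learning rate follow the video, and the steady five-tensor count observed there suggests minimize() cleans up its own intermediate tensors:

    // Interactive linear regression with TensorFlow.js and p5.js.
    // Assumes both libraries are loaded via <script> tags.

    const x_vals = [];
    const y_vals = [];

    let m, b;

    const learningRate = 0.5;
    const optimizer = tf.train.sgd(learningRate);

    function setup() {
      createCanvas(400, 400);
      m = tf.variable(tf.scalar(random(1)));
      b = tf.variable(tf.scalar(random(1)));
    }

    // Mean squared error between predictions and actual y values.
    function loss(pred, labels) {
      return pred.sub(labels).square().mean();
    }

    // y = mx + b for a plain array of x values; returns a tensor.
    function predict(x) {
      const xs = tf.tensor1d(x);
      return xs.mul(m).add(b);
    }

    function mousePressed() {
      x_vals.push(map(mouseX, 0, width, 0, 1));
      y_vals.push(map(mouseY, 0, height, 1, 0));
    }

    function draw() {
      if (x_vals.length > 0) {
        const ys = tf.tensor1d(y_vals);
        optimizer.minimize(() => loss(predict(x_vals), ys));
        ys.dispose();
      }

      background(0);
      stroke(255);
      strokeWeight(8);
      for (let i = 0; i < x_vals.length; i++) {
        point(map(x_vals[i], 0, 1, 0, width),
              map(y_vals[i], 0, 1, height, 0));
      }

      const lineX = [0, 1];
      // tidy() disposes the intermediates predict() creates; the
      // returned tensor survives so its values can be read below.
      const ys = tf.tidy(() => predict(lineX));
      const lineY = ys.dataSync();
      ys.dispose();

      strokeWeight(2);
      line(map(lineX[0], 0, 1, 0, width), map(lineY[0], 0, 1, height, 0),
           map(lineX[1], 0, 1, 0, width), map(lineY[1], 0, 1, height, 0));
    }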
Info
Channel: The Coding Train
Views: 48,169
Keywords: tensorflow tutorial, tensorflow tutorial for beginners, tensorflow js, tensorflow basics, tensorflow example, tensor, tensorflow, machine learning, machine learning basics, machine learning tensorflow, tensorflow basics tutorial, JavaScript (Programming Language), programming, daniel shiffman, tutorial, coding, the coding train, nature of code, artificial intelligence, itp nyu, linear regression, linear regression tensorflow, gradient descent, stochastic gradient descent
Id: dLp10CFIvxI
Length: 43min 44sec (2624 seconds)
Published: Tue May 29 2018