Tesla Autonomy Neural Networks: How AI Neural Networks Function in Tesla

Captions
[Music] But could you please talk about your background, in a way that is not bashful, just tell it like it is. Sure, yeah. I think I've been training neural networks basically for what is now a decade, and these neural networks were not actually really used in the industry until maybe five or six years ago, so it's been some time that I've been training these neural networks, and that included institutions at Stanford, at OpenAI, at Google, and really just training a lot of neural networks, not just for images but also for natural language, and designing architectures that coupled those two modalities for my PhD. At Stanford I actually taught the convolutional neural networks class, and I was the primary instructor for that class; I actually started the course and designed the entire curriculum. In the beginning it was about 150 students and then it grew to 700 students over the next two or three years, so it's a very popular class, one of the largest classes at Stanford right now. I mean, Andrej is really one of the best computer vision people in the world, arguably the best. Okay, thank you.

So hello everyone. Pete told you all about the chip that we've designed that runs neural networks in the car. My team is responsible for training these neural networks, and that includes all of data collection from the fleet, neural network training, and then some of the deployment onto that chip. So what do the neural networks do exactly in the car? What we are seeing here is a stream of videos from across the car; these are eight cameras that send us videos, and then these neural networks are looking at those videos, processing them, and making predictions about what they're seeing. Some of the things we're interested in, some of the things you're seeing on this visualization here, are lane line markings, other objects, the distances to those objects, what we call drivable space shown in blue, which is where the car is allowed to go, and a lot of other predictions like traffic lights, traffic signs, and so on.

Now my talk will be roughly in three stages. First I'm going to give you a short primer on neural networks, how they work and how they're trained, and I need to do this because I need to explain in the second part why it is such a big deal that we have the fleet, why it's so important, and why it's a key enabling factor to really training these neural networks and making them work effectively on the roads. And in the third stage I'll talk about vision and lidar and how we can estimate depth just from vision alone.

So the core problem that these networks are solving in the car is visual recognition. For you these are very simple problems: you can look at all of these four images and you can see that they contain a cello, a boat, an iguana, or scissors. This is very simple and effortless for us. This is not the case for computers, and the reason for that is that these images are, to a computer, really just a massive grid of pixels, and at each pixel you have the brightness value at that point. So instead of just seeing an image, the computer really gets a million numbers in a grid that tell you the brightness values at all the positions; it really is the matrix, if you will.
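To make the "grid of numbers" point concrete, here is a minimal Python sketch (using NumPy and Pillow, with a hypothetical image file name) of what the computer actually receives when it is shown a photo:

import numpy as np
from PIL import Image

# "iguana.jpg" is a hypothetical file name used purely for illustration.
img = Image.open("iguana.jpg").convert("RGB")

# To the computer the image is just a height x width x 3 grid of brightness values.
pixels = np.asarray(img)   # e.g. shape (1080, 1920, 3), dtype uint8
print(pixels.shape)        # the "grid"
print(pixels[0, 0])        # brightness of the top-left pixel, e.g. [132 141 150]
print(pixels.size)         # on the order of a million raw numbers

The printed shape and raw values are all the computer has to work with; everything else, including "iguana", has to be learned.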
So we have to go from that grid of pixels and brightness values to high-level concepts like iguana, and as you might imagine this iguana has a certain pattern of brightness values, but iguanas can actually take on many appearances: they can be in many different poses and different brightness conditions, against different backgrounds, and you can have different crops of that iguana. So we have to be robust across all those conditions, and we have to understand that all those different brightness patterns actually correspond to iguanas.

Now the reason you and I are very good at this is because we have a massive neural network inside our heads that's processing those images. Light hits the retina and travels to the back of your brain, to the visual cortex, and the visual cortex consists of many neurons that are wired together and that are doing all the pattern recognition on top of those images. And really over the last, I would say, about five years, the state-of-the-art approaches to processing images using computers have also started to use neural networks, but in this case artificial neural networks. These artificial neural networks, and this is just a cartoon diagram of one, are a very rough mathematical approximation to your visual cortex, which really does have neurons that are connected together. Here I'm only showing three or four neurons in four layers, but a typical neural network will have tens to hundreds of millions of neurons, and each neuron will have a thousand connections, so these are really large pieces of almost simulated tissue.

Then what we can do is take those neural networks and show them images. For example, I can feed my iguana into this neural network and the network will make predictions about what it's seeing. Now in the beginning these neural networks are initialized completely randomly, so the connection strengths between all those different neurons are completely random, and therefore the predictions of that network are also going to be completely random: it might think that you're actually looking at a boat right now, and it's very unlikely that this is actually an iguana. During the training process, really what we're doing is we know that that's actually an iguana, we have a label, so we're basically saying we'd like the probability of iguana to be larger for this image and the probability of all the other things to go down. And then there's a mathematical process called backpropagation, stochastic gradient descent, that allows us to backpropagate that signal through those connections and update every one of those connections just a little amount, and once the update is complete the probability of iguana for this image will go up a little bit, so it might become 14%, and the probabilities of the other things will go down.

And of course we don't just do this for this single image. We actually have entire large datasets that are labeled, so we have lots of images; typically you might have millions of images, thousands of labels or something like that, and you are doing forward-backward passes over and over again. You're showing the computer: here's an image, it has an opinion, and then you're saying this is the correct answer, and it tunes itself a little bit. You repeat this millions of times, and sometimes you show the same image to the computer hundreds of times as well, so the network training typically will take on the order of a few hours or a few days depending on how big of a network you're training, and that's the process of training a neural network.
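As a rough illustration of that forward-backward loop, here is a minimal PyTorch sketch with a tiny made-up classifier and a placeholder batch standing in for a labeled dataset; it is not Tesla's training code, just the general pattern of cross-entropy training with stochastic gradient descent:

import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny convolutional classifier; real networks have tens of millions of connections.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        x = x.mean(dim=(2, 3))            # global average pool
        return self.head(x)               # class scores ("probability of iguana", etc.)

model = TinyClassifier()                  # connection strengths start out completely random
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Placeholder batch standing in for a labeled dataset of millions of images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))     # e.g. the index of the "iguana" class

for step in range(100):                   # in practice: millions of forward-backward passes
    logits = model(images)                # forward pass: the network's current "opinion"
    loss = F.cross_entropy(logits, labels)  # push the correct class up, the others down
    optimizer.zero_grad()
    loss.backward()                       # backpropagation of the error signal
    optimizer.step()                      # nudge every connection a little amount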
Now there's something very unintuitive about the way neural networks work that I have to really get into, and that is that they really do require a lot of these examples and they really do start from scratch; they know nothing, and it's really hard to wrap your head around this. As an example, here's a cute dog, and you probably may not know the breed of this dog, but the correct answer is that this is a Japanese Spaniel. Now all of us are looking at this and we're seeing a Japanese Spaniel and we're like, okay, I got it, I understand roughly what this Japanese Spaniel looks like, and if I show you a few more images of other dogs you can probably pick out the other Japanese Spaniels here. In particular, those three look like Japanese Spaniels and the other ones do not. So you can do this very quickly and you need one example, but computers do not work like this; they actually need a ton of data of Japanese Spaniels. This is a grid of Japanese Spaniels, showing them in different poses, different brightness conditions, different backgrounds, different crops. You really need to teach the computer, from all the different angles, what this Japanese Spaniel looks like, and it really requires all that data to get this to work; otherwise the computer can't pick up on that pattern automatically.

So what does all this imply about the setting of self-driving? Of course we don't care about dog breeds too much, maybe we will at some point, but for now we really care about lane line markings, objects, where they are, where we can drive, and so on. So the way we do this is we don't have labels like iguana for images, but we do have images from the fleet like this, and we're interested in, for example, lane line markings. A human typically goes into an image and, using a mouse, annotates the lane line markings. Here's an example of an annotation that a human could create, a label for this image, and it's saying that that's what you should be seeing in this image: these are the lane line markings. Then what we can do is go to the fleet and ask for more images. Now if you don't do a careful job of this and you just ask for images at random, the fleet might respond with images like this, typically going forward on some highway; this is what you might just get, a random collection like this, and we would annotate all that data.

Now if you're not careful and you only annotate a random distribution of this data, your network will kind of pick up on that random distribution of data and work only in that regime. So if you show it a slightly different example, for example here is an image where the road is actually curving and it's a bit more of a residential neighborhood, then the network might make a prediction that is incorrect: it might say, okay, well, I've seen lots of times on highways that lanes just go forward, so here's a possible prediction, and of course this is very incorrect. But the neural network really can't be blamed: it does not know whether the tree on the left matters or not, it does not know whether the car on the right matters or not for the lane line, it does not know whether the buildings in the background matter or not. It really starts completely from scratch, and you and I know that the truth is that none of those things matter; what actually matters is that there are a few white lane line markings over there converging at a vanishing point, and the fact that they curve a little bit should pull the prediction.
Except there's no mechanism by which we can just tell the neural network, hey, those lane line markings actually matter. The only tool in the toolbox that we have is labeled data. So what we do is we need to take images like this, where the network fails, and we need to label them correctly. In this case we will annotate the lane curving to the right, and then we need to feed lots of images like this to the neural net, and the neural net over time will basically pick up on the pattern that those things there don't matter but those lane line markings do, and it will learn to predict the correct lane.

So what's really critical is not just the scale of the dataset; we don't just want millions of images, we actually need to do a really good job of covering the possible space of things that the car might encounter on the roads. We need to teach the computer how to handle scenarios where the road is wet and you have all these different specular reflections, and as you might imagine the brightness patterns in these images will look very different. We have to teach the computer how to deal with shadows, how to deal with forks in the road, how to deal with large objects that might be taking up most of the image, how to deal with tunnels, or how to deal with construction sites. And in all these cases there's, again, no explicit mechanism to tell the network what to do; we only have massive amounts of data. We want to source all those images, we want to annotate the correct lines, and the network will pick up on the patterns of those.

Now large and varied datasets basically make these networks work very well. This is not just a finding for us here at Tesla; this is a ubiquitous finding across the entire industry. Experiments and research from Google, from Facebook, from Baidu, from Alphabet's DeepMind all show similar plots where neural networks really love data and love scale and variety. As you add more data, these neural networks start to work better and get higher accuracies for free; more data just makes them work better.

Now a number of people have pointed out that potentially we could use simulation to achieve the scale of these datasets, and we're in charge of a lot of the conditions there, so maybe we can achieve some variety in a simulator; that was also kind of brought up in the questions just before this. Now at Tesla, and this is actually a screenshot of our own simulator, we use simulation extensively: we use it to develop and evaluate the software, and we've also even used it for training quite successfully. But really, when it comes to training data for neural networks, there really is no substitute for real data. Simulations have a lot of trouble with modeling appearance, physics, and the behaviour of all the agents around you. Here are some examples to really drive that point across: the real world really throws a lot of crazy stuff at you. In this case, for example, we have very complicated environments with snow, with trees, with wind. We have various visual artifacts that are hard to simulate. We have complicated construction sites, bushes, and plastic bags that can kind of blow around with the wind, and complicated construction sites that might feature lots of people, kids, animals, all mixed in, and simulating how those things interact and flow through a construction zone like that might actually be completely intractable.
It's not about the movement of any one pedestrian in there; it's about how they respond to each other, how those cars respond to each other, and how they respond to you driving in that setting, and all of those are actually really tricky to simulate. It's almost like you have to solve the self-driving problem just to simulate the other cars in your simulation, so it's really complicated. So we have dogs, exotic animals, and in some cases it's not even that you can't simulate it, it's that you can't even come up with it. For example, I didn't know that you could have truck on truck on truck like that, but in the real world you find this, and you find lots of other things that are very hard to even come up with. So really, the variety that I'm seeing in the data coming from the fleet is just crazy with respect to what we have in a simulator. And we have a really good simulator. In simulation you're fundamentally grading your own homework: if you know that you're going to simulate it, okay, you can definitely solve for it, but as Andrej is saying, you don't know what you don't know. The world is very weird and has millions of corner cases, and if somebody can produce a self-driving simulation that accurately matches reality, that in itself would be a monumental achievement of human capability. They can't; there's no way.

Yeah, so I think the three points I've really tried to drive home until now are that to get neural networks to work well you require three essentials: a large dataset, a varied dataset, and a real dataset. If you have those, you can actually train your neural networks and make them work very well. So why is Tesla in such a unique and interesting position to really get all three of these essentials right? The answer, of course, is the fleet. We can really source data from it and make our neural network systems work extremely well.

So let me take you through a concrete example, for example making the object detector work better, to give you a sense of how we develop these neural networks, how we iterate on them, and how we actually get them to work over time. Object detection is something we care a lot about. We'd like to put bounding boxes around, say, the cars and the objects here, because we need to track them and we need to understand how they might move around. So again we might ask human annotators to give us some annotations for these, and humans might go in and tell you that, okay, those patterns over there are cars and bicycles and so on, and you can train your neural network on this. But if you're not careful, the neural network will make mispredictions in some cases. As an example, if we stumble on a car like this that has a bike on the back of it, then the neural network, when I joined, would actually create two detections: a car detection and a bicycle detection. And that's actually kind of correct, because I guess both of those objects do exist, but for the purposes of the controller and the planner downstream, you really don't want to deal with the possibility that this bicycle can go with the car; the truth is that that bike is attached to that car, so in terms of just objects on the road there's a single object, a single car. So what you'd like to do now is potentially annotate lots of those images as just a single car. The process that we go through internally in the team is that we take this image, or a few images that show this pattern, and we have a machine learning mechanism by which we can ask the fleet to source us examples that look like that.
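The exact mechanism is not described in the talk, but one common way to source "examples that look like that" is embedding similarity search; the sketch below is a hypothetical illustration of that idea, with a random feature extractor standing in for a real one and made-up file names:

import numpy as np

def embed(images):
    # Stand-in for a neural network feature extractor returning unit-length vectors.
    feats = np.random.randn(len(images), 256)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

seed_images = ["car_with_bike_1.jpg", "car_with_bike_2.jpg"]    # hand-picked failure cases
fleet_images = [f"fleet_{i:06d}.jpg" for i in range(100_000)]   # candidate pool of fleet images

query = embed(seed_images).mean(axis=0)    # average embedding of the seed examples
scores = embed(fleet_images) @ query       # cosine similarity to every candidate
top = np.argsort(-scores)[:1000]           # most similar images to send for annotation
to_annotate = [fleet_images[i] for i in top]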
And the fleet might respond with images that contain those patterns. As an example, these six images might come from the fleet; they all contain bikes on the backs of cars, and we would go in and annotate all of those as just a single car, and then the performance of that detector actually improves. The network internally understands that, hey, when the bike is just attached to the car, that's actually just a single car, and it can learn that given enough examples; that's how we've sort of fixed that problem. I will mention that I've talked quite a bit about sourcing data from the fleet; I just want to make a quick point that we've designed this from the beginning with privacy in mind, and all the data that we use for training is anonymized.

Now the fleet doesn't just respond with bicycles on backs of cars; we look for lots of things all the time. For example, we look for boats, and the fleet can respond with boats. We look for construction sites, and the fleet can send us lots of construction sites from across the world. We look for even slightly more rare cases: for example, finding debris on the road is pretty important to us, so these are examples of images that have streamed to us from the fleet showing tires, cones, plastic bags, and things like that. If we can source these at scale, we can annotate them correctly, and the neural network will learn how to deal with them in the world. Here's another example: animals, of course, are also a very rare occurrence, but we want the neural network to really understand what's going on here, that these are animals, and we want to deal with that correctly.

So to summarize, the process by which we iterate on neural network predictions looks something like this. We start with a seed dataset that was potentially sourced at random, we annotate that dataset, and then we train neural networks on that dataset and put them in the car. Then we have mechanisms by which we notice inaccuracies in the car, when the detector may be misbehaving: for example, if we detect that the neural network might be uncertain, or if there's a driver intervention, or any of those settings, we can create this trigger infrastructure that sends us data of those inaccuracies. For example, if we don't perform very well on lane line detection in tunnels, we can notice that there's a problem in tunnels; that image would enter our unit tests so we can verify that we're actually fixing the problem over time. But now, to fix this inaccuracy, you need to source many more examples that look like that, so we ask the fleet to please send us many more tunnels, and then we label all those tunnels correctly, incorporate that into the training set, retrain the network, redeploy, and iterate this cycle over and over again. We refer to this iterative process by which we improve these predictions as the data engine: iteratively deploying something, potentially in shadow mode, sourcing inaccuracies, and incorporating them into the training set, over and over again, and we do this basically for all the predictions of these neural networks.
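Schematically, the data engine loop described above might be sketched like this; every helper here is a trivial placeholder so the loop runs end to end, since the real components (labeling tools, training jobs, shadow-mode triggers) are internal and not shown in the talk:

def annotate(images):            # stand-in for human (or automatic) labeling
    return [(img, "label") for img in images]

def train(dataset):              # stand-in for a full neural network training run
    return {"trained_on": len(dataset)}

def deploy(model, shadow_mode):  # stand-in for shipping the model to the fleet
    pass

def collect_trigger_data(model): # stand-in for uncertainty / intervention triggers
    return [f"hard_case_{i}.jpg" for i in range(3)]

def data_engine(seed_images, iterations=3):
    dataset = annotate(seed_images)               # label a seed set, possibly sourced at random
    for _ in range(iterations):
        model = train(dataset)                    # train on everything labeled so far
        deploy(model, shadow_mode=True)           # run in the fleet without vehicle control
        hard_cases = collect_trigger_data(model)  # inaccuracies streamed back from the cars
        dataset += annotate(hard_cases)           # label them correctly and fold them in
    return train(dataset)                         # retrain, redeploy, and repeat

model = data_engine([f"seed_{i}.jpg" for i in range(10)])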
Now so far I've talked about a lot of explicit labeling. Like I mentioned, we ask people to annotate data, and this is an expensive process in time and, well, it's just an expensive process, so these annotations can be very expensive to achieve. What I also want to talk about is really utilizing the power of the fleet: you don't want to go through this human annotation bottleneck, you want to just stream in data and annotate it automatically, and we have multiple mechanisms by which we can do this.

As one example, a project that we recently worked on is the detection of cut-ins. You're driving down the highway, someone is on the left or on the right, and they cut in in front of you into your lane. Here's a video showing the autopilot detecting that this car is intruding into our lane. Now of course we'd like to detect a cut-in as fast as possible, and the way we approach this problem is we don't write explicit code for: is the left blinker on, is the right blinker on, track the cuboid over time and see if it's moving horizontally. We actually use a fleet learning approach. The way this works is we ask the fleet to please send us data whenever they see a car transition from the right lane to the center lane, or from the left to the center. Then what we do is we rewind time backwards and we can automatically annotate that, hey, that car will, in 1.3 seconds, cut in in front of you, and then we can use that for training the neural net. So the neural net will automatically pick up on a lot of these patterns: for example, the cars are typically yawed a little bit and moving this way, maybe the blinker is on; all that stuff happens internally inside the neural net, just from these examples. We ask the fleet to automatically send us all this data, we can get half a million or so images, all of them annotated for cut-ins, and then we train the network.

Then we took this cut-in network and deployed it to the fleet, but we don't turn it on yet; we run it in shadow mode. In shadow mode the network is always making predictions: hey, I think this vehicle is going to cut in, from the way it looks this vehicle is going to cut in, and then we look for mispredictions. As an example, this is a clip that we had from shadow mode of the cut-in network, and it's kind of hard to see, but the network thought that the vehicle right ahead of us and on the right was going to cut in. You can sort of see that it's slightly flirting with the lane line, it's sort of encroaching a little bit, and the network got excited and thought that vehicle would actually end up in our center lane. That turns out to be incorrect; the vehicle did not actually do that. So what we do now is we just turn the data engine crank: the network runs in shadow mode making predictions, it makes some false positive and some false negative detections, so sometimes it gets overexcited and sometimes it misses a cut-in when it actually happened. All of those create a trigger that streams to us and gets incorporated, now for free, no humans harmed in the process of labeling this data, into our training set. We retrain the network and redeploy to shadow mode, and we can spin this a few times. We always look at the false positives and false negatives coming from the fleet, and once we're happy with the false positive to false negative ratio, we actually flip the bit and let the network actually control the car. You may have noticed we actually shipped one of our first versions of the cut-in detection architecture approximately, I think, three months ago, so if you've noticed that the car is much better at detecting cut-ins, that's fleet learning operating at scale. Yes, it actually works quite nicely.
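The core trick is that the label comes from the future of the log rather than from a human. A hedged sketch of that rewind-and-label idea, with an assumed frame rate and log format, might look like this:

# Once a logged vehicle is seen to enter the ego lane, rewind time and mark the
# earlier frames as "will cut in". The log format and the 1.3 s horizon are
# illustrative assumptions, not the actual pipeline.

FPS = 36                      # frames per second of the logged clip (assumed)
HORIZON_S = 1.3               # how far ahead of the lane change we want to predict

def label_cut_ins(lane_per_frame):
    """lane_per_frame[t] is the tracked vehicle's lane at frame t ('left', 'ego', ...)."""
    labels = [0] * len(lane_per_frame)
    for t in range(1, len(lane_per_frame)):
        entered_ego = lane_per_frame[t] == "ego" and lane_per_frame[t - 1] != "ego"
        if entered_ego:
            start = max(0, t - int(HORIZON_S * FPS))
            for k in range(start, t):
                labels[k] = 1          # positive "will cut in within 1.3 s" label, for free
    return labels

# Tiny worked example: a car drives in the left lane, then merges into the ego lane.
lanes = ["left"] * 100 + ["ego"] * 20
labels = label_cut_ins(lanes)
print(sum(labels))   # ~46 frames before the merge get an automatic positive label

Every frame in the window before the observed lane change becomes a free positive example of "this car will cut in", and everything else a negative.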
So that's fleet learning: no humans were harmed in the process, it's just a lot of neural network training based on data, and a lot of shadow mode and looking at those results. Essentially, everyone's training the network all the time is what it amounts to. Whether autopilot is on or off, the network is being trained; every mile that's driven, for cars that are hardware 2 or above, is training the network. Yeah.

Another interesting way that we use fleet learning, the other project that I'll talk about, is path prediction. While you are driving the car, what you're actually doing is you are annotating the data, because you are steering the wheel, you're telling us how to traverse different environments. What we're looking at here is some person in the fleet who took a left through an intersection, and what we do is we have the full video of all the cameras, and we know the path that this person took because of the GPS, the inertial measurement unit, the wheel angle, and the wheel ticks. We put all that together and we understand the path that this person took through this environment, and then of course we can use this as supervision for the network. We just source a lot of this from the fleet, we train a neural network on those trajectories, and then the neural network predicts paths just from that data. This is typically referred to as imitation learning: we're taking human trajectories from the real world and just trying to imitate how people drive in the real world, and we can also apply the same data engine crank to all of this and make it work over time.

Here's an example of path prediction going through a kind of complicated environment. What you're seeing here is a video, and we are overlaying the predictions of the network, so this is the path that the network would follow, in green. Yeah, maybe the crazy thing is the network is predicting paths it can't even see, with incredibly high accuracy. It can't see around the corner, but it's saying the probability of that curve is extremely high, so that's the path, and it nails it. You will see that in the cars today; we're going to turn on augmented vision so you can see the lane lines and the path predictions of the cars overlaid on the video. Yeah, there's actually more going on under the hood than you can even tell; it's kind of scary. Of course there are a lot of details I'm skipping over: you might not want to imitate all the drivers, you might want to just imitate the better drivers, and there are many technical ways that we actually slice and dice that data. But the interesting thing here is that this prediction is actually a 3D prediction that we project back to the image, so the path forward is a three-dimensional thing that we're just rendering in 2D, and we know about the slope of the ground from all this, and that's actually extremely valuable for driving.

So path prediction is actually live in the fleet today. If you're in a cloverleaf on the highway, until maybe five months ago or so your car would not be able to do the cloverleaf; now it can. That's path prediction running live on your cars; we shipped this a while ago. And today you are going to get to experience this for traversing intersections: a large component of how we go through intersections in your drives today is all sourced from path prediction, from automatic labels.
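As a rough sketch of that imitation-learning setup, the trajectory the human actually drove (recovered from GPS, IMU, and wheel odometry) becomes the regression target for a network that only sees camera images. The tiny model, shapes, and placeholder data below are illustrative assumptions, not Tesla's architecture:

import torch
import torch.nn as nn

class PathPredictor(nn.Module):
    def __init__(self, num_waypoints=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_waypoints * 2)   # (x, y) offsets ahead of the car

    def forward(self, image):
        return self.head(self.backbone(image)).view(-1, 10, 2)

model = PathPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: camera frames plus the trajectory the human actually drove,
# expressed as 10 future waypoints in the vehicle frame (meters).
frames = torch.randn(4, 3, 240, 320)
driven_path = torch.cumsum(torch.ones(4, 10, 2), dim=1)   # stand-in for logged waypoints

for step in range(50):
    pred = model(frames)
    loss = nn.functional.mse_loss(pred, driven_path)  # imitate the human trajectory
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()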
So what I've talked about so far is really the three key components of how we iterate on the predictions of the network and how we make it work over time. You require a large, varied, and real dataset; we can really achieve that here at Tesla, and we do that through the scale of the fleet, the data engine, shipping things in shadow mode, iterating that cycle, and potentially even using fleet learning, where no human annotators are harmed in the process and we just use data automatically, and we can really do that at scale.

In the next section of my talk I'm going to talk especially about depth perception using vision only. You might be familiar with the fact that there are at least two sensors people use for this: one is vision, cameras just getting pixels, and the other is lidar, which a lot of companies also use, and lidar gives you point measurements of distance around you. Now one thing I'd like to point out first of all is that you all came here, you drove here, many of you, and you used your neural net and vision; you were not shooting lasers out of your eyes and you still ended up here. We might have. So clearly the human neural net derives distance, all the measurements, and a 3D understanding of the world just from vision. It actually uses multiple cues to do so, and I'll just briefly go over some of them to give you a sense of roughly what's going on inside. As an example, we have two eyes pointed out, so you get two independent measurements at every single time step of the world ahead of you, and your brain stitches this information together to arrive at a depth estimate, because you can triangulate any point across those two viewpoints. A lot of animals instead have eyes positioned on the sides, so they have very little overlap in their visual fields; they will typically use structure from motion, and the idea is that they bob their heads, and because of the movement they get multiple observations of the world and can again triangulate depths. And even with one eye closed and completely motionless, you still have some sense of depth perception: if you did this, you would still notice if I came two meters towards you or moved a hundred meters back, and that's because there are a lot of very strong monocular cues that your brain also takes into account. This is an example of a pretty common visual illusion: these two blue bars are identical, but your brain, the way it stitches up the scene, just expects one of them to be larger than the other because of the vanishing lines of the image. So your brain does a lot of this automatically, and artificial neural nets can as well.

So let me give you three examples of how you can arrive at depth perception from vision alone: a classical approach and two that rely on neural networks. Here's a video going down, I think this is San Francisco, from a Tesla, so these are our cameras, our sensing. I'm only showing the main camera, but all eight cameras of the autopilot are turned on, and if you just have this six-second clip, what you can do is stitch up this environment in 3D using multi-view stereo techniques. So, oops, this is supposed to be a video... there we go. This is the 3D reconstruction of those six seconds of that car driving through that path, and you can see that this information is very well recoverable from just videos, and roughly that's through a process of triangulation, as I mentioned, multi-view stereo. We've applied similar techniques, in a slightly more sparse and approximate form, in the car as well.
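The classical triangulation idea is easy to state: if the same point is seen from two known viewpoints, its distance follows from how far it shifts between the two images. A minimal sketch for a rectified stereo pair, with made-up camera numbers, is:

# Depth from disparity for a rectified two-view setup; the focal length and
# baseline below are illustrative values, not the autopilot camera geometry.

focal_px = 1000.0      # focal length in pixels (assumed intrinsics)
baseline_m = 0.12      # distance between the two viewpoints in meters (assumed)

def depth_from_disparity(x_left, x_right):
    disparity = x_left - x_right            # horizontal shift of the same point (pixels)
    return focal_px * baseline_m / disparity

# The same lane marking observed at x=640 in the left view and x=628 in the right:
print(depth_from_disparity(640.0, 628.0))   # -> 10.0 meters away

Multi-view stereo generalizes this to many viewpoints along the car's trajectory and many points per image.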
So it's remarkable that all that information is really there in the sensor, and it's just a matter of extracting it. The other project that I want to briefly talk about is, as I mentioned, that neural networks are very powerful visual recognition engines, and if you want them to predict depth, then you need to, for example, have labels of depth, and then they can actually do that extremely well; there's nothing limiting networks from predicting this monocular depth except for labeled data. So one example project that we've looked at internally is that we use the forward-facing radar, which is shown in blue, and that radar is looking out and measuring depths of objects, and we use that radar to annotate what vision is seeing, the bounding boxes that come out of the neural networks. Instead of human annotators telling you, okay, this car and this bounding box is roughly twenty-five meters away, you can annotate that data much better using sensors, so you use sensor annotation. As an example, radar is quite good at that distance; you can annotate with it and then train your neural network on it, and if you just have enough data, the neural network is very good at predicting those patterns. Here's an example of predictions of that: in circles I'm showing radar objects, and the cuboids that are coming out here are purely from vision, and the depth of those cuboids is learned through sensor annotation from the radar. If this is working well, you would see that the circles in the top-down view agree with the cuboids, and they do, and that's because neural networks are very competent at predicting depths: they can learn the different sizes of vehicles internally, they know how big those vehicles are, and you can actually derive depth from that quite accurately.

The last mechanism I will talk about very briefly is slightly more fancy, a bit more technical, but it is a mechanism for which there have been a few papers basically over the last year or two; the approach is called self-supervision. What you do in a lot of these papers is you only feed raw videos into neural networks, with no labels whatsoever, and you can still get neural networks to learn depth. It's a little bit technical so I can't go into the full details, but the idea is that the neural network predicts depth at every single frame of that video, and then there are no explicit targets that the neural network is supposed to regress to with labels; instead, the objective for the network is to be consistent over time. Whatever depth you predict should be consistent over the duration of that video, and the only way to be consistent is to be right, so the neural network automatically predicts the correct depth for all the pixels. We've reproduced some of these results internally, and this also works quite well.

So in summary, people drive with vision only, no lasers are involved, and this seems to work quite well. The point that I'd like to make is that visual recognition, and very powerful visual cognition, is absolutely necessary for autonomy; it's not a nice-to-have. We must have neural networks that actually really understand the environment around you, and lidar points are a much less information-rich representation of that environment: vision really understands the full details, whereas a few points around you carry much less information.
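A hedged sketch of that self-supervised consistency objective: predict depth for frame t, use the ego-motion to reproject the next frame back into frame t, and penalize the photometric difference, with no depth labels anywhere. The intrinsics, the pure forward-translation ego-motion, and the tiny image size below are simplifying assumptions:

import torch
import torch.nn.functional as F

H, W = 96, 160
fx = fy = 120.0
cx, cy = W / 2.0, H / 2.0

def photometric_consistency_loss(frame_t, frame_t1, depth_t, forward_motion_m):
    # Pixel grid of frame t.
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    # Backproject each pixel of frame t to a 3D point using the predicted depth.
    z = depth_t.squeeze(1)                       # (1, H, W)
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    # Apply ego-motion (here: the camera simply moved forward by forward_motion_m meters).
    z2 = z - forward_motion_m
    # Project into frame t+1 and normalize to [-1, 1] for grid_sample.
    u2 = fx * x / z2 + cx
    v2 = fy * y / z2 + cy
    grid = torch.stack([2 * u2 / (W - 1) - 1, 2 * v2 / (H - 1) - 1], dim=-1)  # (1, H, W, 2)
    # Reconstruct frame t by sampling frame t+1 at the reprojected locations.
    reconstructed = F.grid_sample(frame_t1, grid, align_corners=True)
    return (reconstructed - frame_t).abs().mean()

# Placeholder tensors standing in for two consecutive camera frames and a depth prediction.
frame_t = torch.rand(1, 3, H, W)
frame_t1 = torch.rand(1, 3, H, W)
depth_t = torch.rand(1, 1, H, W) * 50 + 5        # would come from the depth network
loss = photometric_consistency_loss(frame_t, frame_t1, depth_t, forward_motion_m=0.5)
print(loss.item())

In the published approaches the ego-motion is usually predicted by a second network and trained jointly, but the consistency idea is the same.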
As an example, on the left here, is that a plastic bag or is that a tire? Lidar might just give you a few points on that, but vision can tell you which one of those two is true, and that impacts your control. Is that person who is slightly looking backwards trying to merge into your lane on the bike, or are they just going forward? In the construction sites, what do those signs say, how should I behave in this world? The entire infrastructure that we have built up for roads is all designed for human visual consumption, so all the signs, all the traffic lights, everything is designed for vision, and that's where all that information is, and so you need that ability. Is that person distracted and on their phone, are they going to walk into your lane? The answers to all these questions are only found in vision and are necessary for level 4, level 5 autonomy. That is the capability we are developing at Tesla, and it's done through a combination of large-scale neural network training, the data engine, getting that to work over time, and using the power of the fleet. In this sense, lidar is really a shortcut: it sidesteps the fundamental problem, the important problem of visual recognition, that is necessary for autonomy, and so it gives a false sense of progress and is ultimately a crutch. It does give really fast demos, though.

So if I were to summarize my entire talk in one slide, it would be this. All of autonomy, because you want level 4, level 5 systems, requires handling all the possible situations in 99.9% of the cases, and chasing some of those last few nines is going to be very tricky and very difficult; it's going to require a very powerful visual system. I'm showing you some images of what you might encounter in any one slice of those nines: in the beginning you just have very simple cars going forward, then those cars start to look a little bit funny, then maybe you have bikes on cars, then maybe cars on cars, then maybe you start to get into really rare events like cars turned over or even cars airborne. We see a lot of things coming from the fleet, and we see them at a really good rate compared to all of our competitors, and so the rate of progress at which you can actually address these problems, iterate on the software, and really feed the neural networks with the right data is proportional to how often you encounter these situations in the wild, and we encounter them significantly more frequently than anyone else, which is why we're going to do extremely well. Thank you. [Applause] [Music]
Info
Channel: Eziz T
Views: 23,233
Keywords: Tesla Autonomy, Tesla neural network, AI tesla, Tesla image recognition, shadow mode
Id: eZOHA6Uy52k
Length: 34min 13sec (2053 seconds)
Published: Tue Apr 23 2019