AI for Makers - Learn with JetBot

Captions
Hi everyone, and welcome to today's webinar, "AI for Makers: Learn with JetBot," hosted by John Welsh. Before we begin, we wanted to cover a few housekeeping items. At the bottom of your screen are multiple application widgets. All of the widgets, as well as the slide area, are resizable and movable, so feel free to move them around to get the most out of your desktop space. If you have any questions during the webinar, please submit them through the Q&A widget. We'll try to cover as many questions as possible during the Q&A portion, but if there are any we don't get to, please post them on the forum to keep the conversation going. Be sure to check out the resource list for links to the developer site, forums, wiki, and more, and if you run into any technical issues during the webinar, you can find answers to some common questions in the help widget at the bottom of your screen. An on-demand version of the webinar, as well as the slide deck, will be emailed to you later today. And with that, let's get started.

Hello everyone, my name is John Welsh and I'll be presenting today's webinar, titled "AI for Makers: Learn with JetBot." Before we get started, I want to give a brief introduction to who I am and what I do at NVIDIA. My title at NVIDIA is Developer Technology Engineer, which means I focus on applications of NVIDIA Jetson, typically using deep learning. It also means that I create open-source software and content that enables other developers to create things using NVIDIA Jetson. Before joining NVIDIA, I was introduced to deep learning during my master's thesis, where I used deep learning for pose recognition to enable a robot to more reliably follow a person.

My knowledge of deep learning has been helpful in creating projects like JetBot, which we built here at NVIDIA to show people how to get started with Jetson Nano. JetBot is open source and comes with the full bill of materials needed to create the robot from scratch. It also has a hardware setup guide with pictures and instructions so that you can build it yourself. The examples in JetBot take you from the basics, like using Python from your web browser to control basic motions of the robot, to full AI workflows where you teach JetBot how to avoid collisions. We hope that by building JetBot and running through the examples you'll feel more comfortable with Jetson Nano and with how to apply AI to new problems, like how we applied the JetBot road following sample to this RC car, or like the NVIDIA green machine that our interns built last summer using the Jetson TX2.

We think there's a class of projects that Jetson Nano is particularly well suited for. These projects usually include low-latency camera input, real-time processing using deep learning on the Jetson's combined CPU/GPU architecture, and hardware interfacing using either the USB ports or the GPIO pins provided on the Jetson Nano Developer Kit. We hope to teach you how to create projects with Jetson Nano by running through JetBot. So the way we'll structure this webinar is: we'll first cover the hardware that we used in JetBot, where you can discover components that will be useful for your own projects; we'll then run through the software setup, including how to create an SD card from scratch, so that you'll learn how to structure your project to be reproducible; and then we'll run through the examples, where you'll learn how to apply AI to new problems, which will be useful for creating your own projects.

So let's get started by going through the hardware of JetBot, focusing on the pieces that will be useful for creating your own projects.
The hardware in JetBot can be found by navigating to github.com/NVIDIA-AI-IOT/jetbot. If you navigate to the Wiki tab, you'll find the bill of materials. On the home page of the wiki, under common components, you'll see a list of hardware that is needed for each JetBot you want to build; for each JetBot, you'll need to order the number listed under the quantity column.

Perhaps obvious, but the most essential component you'll need when building JetBot is an NVIDIA Jetson Nano. You can purchase this by clicking the link under the URL column, which takes you to the NVIDIA web page where you can buy directly from NVIDIA, or, if you hit Buy Now, you'll see a list of third-party vendors like Arrow, Amazon, Seeed, SparkFun, and Newegg which sell the developer kit as well.

The next item is the microSD card, for which we list a size of 64 GB; you can purchase it from Amazon at the link listed. The reason we use a 64 GB SD card is to allow plenty of space to store data when you're collecting your dataset with JetBot. You could probably also get away with a 32 GB SD card for JetBot, but we wouldn't recommend a 16 GB SD card, because you may not be able to comfortably store the dataset and install additional software on top of our SD card image. When using an SD card smaller than 64 GB, you may need to follow the Create SD Card From Scratch page in the wiki, which we'll cover later in this webinar.

The next item in the bill of materials is the micro-USB power supply, which you can purchase from Adafruit. This is also the power supply recommended in the Getting Started with Jetson Nano guide. It's important that the power supply you use over micro-USB can supply 5 volts at 2 amps. Even with the appropriate micro-USB power supply, it's possible for the Jetson Nano to draw more power than can be supplied over the micro-USB port. To remedy this and run the Jetson Nano at lower power consumption, you can use a tool called nvpmodel (a short sketch of calling it from Python follows below). In a terminal, we can set the Jetson Nano to 5-watt mode by typing sudo nvpmodel -m 1. To check the current power mode of the Jetson Nano, we can type sudo nvpmodel -q, and we can see that the Jetson Nano is in 5-watt mode. To configure the Jetson Nano to run back in 10-watt mode, we can type sudo nvpmodel -m 0.

Instead of reducing the power consumption of the Jetson Nano using nvpmodel, we could also provide more power to the Jetson Nano Developer Kit using some other options. These options include powering at 5 volts, 4 amps over the barrel-plug adapter, powering at 5 volts, 6 amps using two of the 3-amp pins on the GPIO header, or, as mentioned earlier, powering at 5 volts, 2 amps over micro-USB. For JetBot we're powering over the micro-USB port because of its simplicity, but for your project, depending on the power consumption, you might want to use one of the other options.

The power supply that we list for JetBot can be used to power the Jetson Nano directly, but we also use it to charge the battery pack. The battery we list in the bill of materials is a consumer battery pack that's typically used to charge mobile devices. We selected a consumer battery pack because it's cheap and it has the charging circuitry built in, but these battery packs often have caveats, so it's important to test the battery pack for your application. We tested this battery pack with the full JetBot configuration through all the examples to make sure that it could supply enough power for the components.
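As a rough illustration of the nvpmodel commands just mentioned, here is a minimal sketch of switching and querying the power mode from Python by shelling out to nvpmodel. The mode numbers (0 for 10-watt MAXN, 1 for 5-watt) match the stock Jetson Nano configuration, and the script assumes it is run with root privileges, for example via sudo:

    # Minimal sketch: set and query the Jetson Nano power mode via nvpmodel.
    # Assumes nvpmodel is on the PATH and this script is run with root privileges.
    import subprocess

    def set_power_mode(mode):
        # Mode 0 is 10-watt MAXN and mode 1 is 5-watt on the stock Jetson Nano.
        subprocess.run(["nvpmodel", "-m", str(mode)], check=True)

    def current_power_mode():
        # 'nvpmodel -q' prints the active power mode.
        result = subprocess.run(["nvpmodel", "-q"], check=True,
                                stdout=subprocess.PIPE, universal_newlines=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        set_power_mode(1)            # drop to 5-watt mode before running on battery
        print(current_power_mode())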
The next notable item on the bill of materials is the motor driver. The motor driver we selected from Adafruit takes commands over the I2C pins on the GPIO header, generates a PWM signal which is used to drive an H-bridge, and ultimately controls the motors. It was especially easy to control these motors in software given the new NVIDIA Jetson support for the I2C and GPIO portions of the Adafruit Blinka library. This support allows us to use Python libraries like the Adafruit MotorKit library, which makes it very easy to control motors (a minimal sketch follows at the end of this section). It also allows us to use various other sensors that are supported by Adafruit Blinka over I2C and GPIO. Other hardware distributors like Seeed Studio have also announced support for NVIDIA Jetson Nano, which lets us use various of their Grove peripherals with Jetson Nano. These features make it much easier to incorporate hardware into your AI projects; for example, we could use the PiOLED display from Adafruit in Python without having to write our own driver.

The next notable items on the bill of materials are the 3D-printed chassis and camera mount. We provide some tips on how to obtain these components on a separate wiki page titled 3D Printing. If you don't have a 3D printer, you may be able to use a printing service to print the chassis and have it mailed to you.

The next notable item on the bill of materials is the camera. We chose a wide field-of-view camera for JetBot so that it can see the ground immediately below it, to detect obstacles and features, as well as slightly above the horizon in case it needs to track an object. We restricted the camera placement of JetBot to improve the generalizability of neural networks trained for JetBot across different robots. That said, because we wanted to showcase different camera options that you can use with NVIDIA Jetson Nano, and to make the JetBot project more accessible in case an item is out of stock, we list two options for the camera: one option is the Leopard Imaging H145 camera, and the other is the Raspberry Pi Camera Module v2 combined with a wide-angle attachment lens. The fields of view of the cameras are slightly different, so it's possible that a neural network trained on one camera won't generalize as well when used on a different camera. However, as we'll learn later in the examples, it's very easy to collect your own dataset and train a neural network yourself. This will be a very useful skill when creating your own projects, because you might want to do something other than the collision avoidance example that we're doing with JetBot.

When you're picking a camera for your project, an important feature to look for is whether it has a driver for NVIDIA Jetson. The Sony IMX219 sensor does have a driver, so most cameras with this sensor and the appropriate connector will work with Jetson Nano. For example, Arducam offers various camera boards based on this sensor that have the correct connector and also allow for custom lens selection using either an M12 or CS lens mount. They also provide a comprehensive lens kit with fields of view ranging from 10 to 200 degrees that you could use to prototype your projects.

The final item that we'll cover on the bill of materials, which we think will be useful for your own projects, is the Wi-Fi solution. Similar to how we offer two options for cameras, we also offer two options for Wi-Fi: one is an M.2 dual-band wireless card along with a pair of U.FL connector antennas, and the other is a USB Wi-Fi dongle. It's important to ensure that the Wi-Fi solution you choose has drivers for NVIDIA Jetson.
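Here is the minimal motor-control sketch promised above. It assumes the adafruit-circuitpython-motorkit package (and Adafruit Blinka) is installed and the motor driver board is at its default I2C address; which physical motor corresponds to motor1 or motor2 depends on how you wire the robot:

    # Minimal sketch: drive two DC motors through the Adafruit MotorKit library.
    # Assumes adafruit-circuitpython-motorkit and Blinka are installed and the
    # driver board is at its default I2C address.
    import time
    from adafruit_motorkit import MotorKit

    kit = MotorKit()

    kit.motor1.throttle = 0.5    # one motor forward at half speed
    kit.motor2.throttle = 0.5    # the other motor forward at half speed
    time.sleep(1.0)

    kit.motor1.throttle = -0.3   # spin in place briefly
    kit.motor2.throttle = 0.3
    time.sleep(0.5)

    kit.motor1.throttle = 0.0    # stop both motors
    kit.motor2.throttle = 0.0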
Both of the Wi-Fi options we list on the JetBot bill of materials do have drivers, so these are good choices when you're building your own project. For the JetBot project, we recommend the first option, the M.2 card and antennas, because we have the most experience using it with the full system configuration.

That concludes our discussion of the hardware components of JetBot. You can build JetBot by following the hardware setup guide in the wiki on GitHub. Now let's talk about the software components of JetBot, like how to install TensorFlow and PyTorch and create a custom SD card for your project.

Before we go through the steps of installing the JetBot software, let's talk about how the JetBot project is organized. JetBot is hosted on GitHub as a repository, with a selection of files and folders that are versioned. An important file in the project is the setup.py file; this calls scripts to build and package our project so that we can import it in Python without having to configure the path (a minimal example of this idea appears after this section). Under the jetbot folder, we can see different software components like a camera class, a motor class, and some other utilities that we found common throughout the JetBot examples and therefore wanted to turn into abstractions. If you dig into the code, you can learn more about how we structured the project and how we use libraries like OpenCV to capture video from the camera. In the notebooks folder, you'll see different examples organized by name. If we navigate into one of the examples, we'll see a set of IPython notebooks broken into their different stages; the reason we have three separate notebooks for the collision avoidance example is in case you want to train the model on a device other than the NVIDIA Jetson Nano. We put the documentation and content that wasn't as important to version in the wiki, so that we could easily adjust it and make it fit our needs. We found it important for the documentation to include everything someone visiting the repository would need in order to recreate what we've done and learn from it.

While the most important thing when building your AI project is to have fun and do something that interests you, there's an opportunity for others to learn from your work if you include the information they need to understand what your project is about and how to reproduce it. One way to make your AI project more easily reproducible is to create a custom SD card image. You can preconfigure the SD card image with all the necessary software and system configurations so that when you want to use your project elsewhere, or you want others to use your project, you don't have to run through all these tedious steps. As always, make sure you're compliant with the licenses of the software that you're using. This is what we did for the JetBot project, so all anyone has to do is download our compressed SD card image, flash it onto an SD card, plug it into the Jetson Nano, and boot up.

So let's show how you can create your own SD card image from scratch. To create the JetBot SD card image, we start from the SD card image mentioned in the Getting Started with Jetson Nano guide. Following the instructions in the getting started guide, we flash the SD card image using a tool available for Windows, Mac, or Linux. We then plug the SD card into our Jetson Nano Developer Kit and boot up. The first time you boot from the standard SD card image mentioned in the getting started guide, you need to configure the Linux operating system with a username and password.
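Here is the minimal packaging example promised above. It is only a sketch of the idea; the project name, version, and dependency are placeholders rather than the actual JetBot values. With a file like this at the top of your repository, sudo python3 setup.py install (or pip3 install .) makes the package importable from any directory:

    # Sketch of a minimal setup.py for an installable robot project.
    # The name, version, and dependency below are placeholders, not JetBot's.
    from setuptools import setup, find_packages

    setup(
        name="myrobot",                          # hypothetical package name
        version="0.1.0",
        description="Example AI robot project for Jetson Nano",
        packages=find_packages(),                # picks up the myrobot/ package directory
        install_requires=["Adafruit-MotorHAT"],  # hypothetical runtime dependency
    )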
For the JetBot project, we set the username and password to jetbot and jetbot, and we also select automatic login so that the robot will automatically connect to Wi-Fi when we turn it on.

We then install various software dependencies. One of the first things we need to install is pip for Python 3, which is a package installer for Python; we do this in a terminal on the Jetson Nano by typing sudo apt-get install python3-pip. NVIDIA provides an official pre-built TensorFlow pip wheel, with instructions at the link shown. If you follow these instructions on the Jetson Nano from a terminal, like we did to install the pip package, you'll be able to use TensorFlow from Python. The next notable package we install for JetBot, which we think will be very useful for your projects, is PyTorch. On our developer forums, we posted pre-built PyTorch pip wheels for Python 2.7 and Python 3.6. If you copy the commands listed under the version you want into a terminal on the Jetson Nano, you can easily install the PyTorch pip wheel and then begin using it in Python.

One feature of JetBot that people are particularly interested in is that it's programmed from a web browser using JupyterLab. In our Create SD Card Image From Scratch guide, we list the steps necessary to install JupyterLab and configure a password on the Jetson Nano. By default, if you wanted to use JupyterLab, you would need to launch the service manually every time you boot the Jetson Nano, by typing the following command. However, we wanted it to be much more convenient, by allowing the JupyterLab service to start immediately when the Jetson Nano boots, along with a service that displays the IP address, so that all you need to do to program the robot is read the IP address and connect to it in your browser. We provide the scripts to generate these services as part of the jetbot Python package, so all you need to do is clone and install the jetbot package, execute the scripts, and enable the system services so that the JupyterLab server and the stats display script start at boot. As a final step, we create a 4 GB swap file that's enabled at boot, so that runtime memory that's accessed less frequently can be placed on permanent storage.

Once you have your Jetson Nano configured just how you want it, you can create a snapshot of the SD card image that you can compress to use in other copies of your project, or so that other users can easily use your project. To do this, you would first shut down the Jetson Nano and remove the SD card from the SD card slot. Then you would plug the SD card into a Linux desktop machine and check where the device is located by typing sudo fdisk -l. Assuming the SD card is located at /dev/sdb, we could record a snapshot of the SD card image to a file by typing the following command. To greatly reduce the size of this file, we would then compress it with zip by typing the following. This new compressed image can be used just like the SD card image in the Getting Started with Jetson Nano Developer Kit guide, using the tool Etcher, which is able to accept the compressed image directly.
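Once a card prepared this way boots, a quick sanity check, assuming the NVIDIA TensorFlow and PyTorch wheels described above installed without errors, is to confirm that both frameworks import and that PyTorch can see the GPU:

    # Quick sanity check that the deep learning stack is usable after setup.
    import torch
    import tensorflow as tf

    print("PyTorch", torch.__version__, "CUDA available:", torch.cuda.is_available())
    print("TensorFlow", tf.__version__)

    # A tiny computation on the GPU confirms the CUDA runtime actually works.
    x = torch.ones(1000, 1000, device="cuda")
    print((x @ x).sum().item())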
That concludes our discussion of how the JetBot project is structured and how you can structure your own project to be more reproducible. The content we've covered in this webinar so far shouldn't be too unfamiliar if you're already working with a Raspberry Pi. Now we'll get into the area where NVIDIA Jetson really excels, which is real-time deep learning, and we'll explore these concepts through the examples provided in JetBot.

The examples in JetBot are in the form of Jupyter notebooks that you can access by connecting to the robot from your web browser. After you've followed the initial setup and powered on your JetBot, the IP address is automatically displayed on the PiOLED screen when the JetBot boots up. We enter this IP address in our web browser with port 8888, and we're presented with the notebooks folder we mentioned earlier in the GitHub repository; we've copied the examples into a folder titled notebooks just to be slightly more convenient. We're also presented with the first example notebook, called basic motion. The basic motion notebook will help you get familiar with the Jupyter notebook environment: how you can execute code to call different commands, like how we call left here, and how to use widgets to control different parameters of the robot. The next example in the series is the teleoperation example, where you'll use a gamepad controller connected to your host computer; the gamepad is accessed through the web browser using HTML5. In this example, you'll also learn how to preview live video from the robot within the Jupyter notebook itself, using consecutive compressed JPEG frames.

The next example in the series is the collision avoidance example. This example is broken into three parts: data collection, where we collect a classification dataset that tells whether there's an imminent collision or whether the robot is safe to move forward; training, where we train the model with the train model notebook; and deployment, where we run the model in a live demo in which the robot moves around and turns if it thinks it's unable to move forward.

So let's get started with the data collection notebook. In this notebook, the first thing we do is display the live camera feed. We set the camera to capture at a width and height of 224 pixels, because this is what will be fed into the neural network. You could capture at a higher resolution and then downsample later in case you wanted to experiment with different neural network input sizes, but most of the neural networks we'll be using for classification default to 224 by 224 pixels, and from our tests this is sufficient for the task at hand. We do encourage you to explore different options and try new things to see how you can trade off between performance and accuracy.

The next block of code creates a dataset folder that we can see in the Jupyter file browser on the left. If we navigate into the dataset folder, we'll see two directories named blocked and free; these will store the images of each class. The blocked folder will contain images of instances where the robot is unable to move forward, and the free folder will contain images of instances where the robot can move forward. The following blocks of code create widgets that we can use as buttons to save images to each class folder, and then we attach functions to the widgets to actually perform this operation; we then display all the widgets together for convenience (a rough sketch of this pattern follows below).

We'll now go ahead and collect our dataset. When collecting a dataset, it's important to get a good amount of data, but even more important is the variety of the data. We want to show different scenarios in which the robot could be blocked, like a ledge, or an object like my cell phone, maybe a cable in front of it on the ground, the corner of my laptop, or maybe the corner of my laptop at a different angle. And then we want to show different instances when it's free, which is basically just different angles pointing away from the object region.
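The save buttons described above follow a simple pattern. Here is a rough sketch of the idea; the directory names match the notebook, but the exact helper code differs, and it assumes a JetBot-style camera object whose latest frame is exposed as a BGR numpy array through camera.value:

    # Rough sketch: buttons that snapshot the current camera frame into a
    # 'blocked' or 'free' folder. Assumes a camera object created earlier whose
    # latest frame is available as a BGR numpy array via camera.value.
    import os
    from uuid import uuid1

    import cv2
    import ipywidgets as widgets
    from IPython.display import display

    blocked_dir = "dataset/blocked"
    free_dir = "dataset/free"
    os.makedirs(blocked_dir, exist_ok=True)
    os.makedirs(free_dir, exist_ok=True)

    def save_snapshot(directory):
        # uuid1 gives each image a unique file name
        path = os.path.join(directory, str(uuid1()) + ".jpg")
        cv2.imwrite(path, camera.value)

    free_button = widgets.Button(description="add free")
    blocked_button = widgets.Button(description="add blocked")
    free_button.on_click(lambda b: save_snapshot(free_dir))
    blocked_button.on_click(lambda b: save_snapshot(blocked_dir))
    display(widgets.HBox([free_button, blocked_button]))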
So now we'll go ahead and collect the dataset, using my desk as a training environment; we'll see how well the model performs on my desk as well as in environments that weren't in the training set. I finished collecting about a hundred images of each class, and it took a total of five minutes. We can see the images by navigating with the JupyterLab file browser. Let's go ahead and open up the training notebook. Before we begin the training notebook, I'm going to shut down my JetBot and power it directly with the power adapter.

Now I have the training notebook open. In the training notebook, we use PyTorch to define and train our neural network. The PyTorch library provides some utilities for loading data; for example, there's an ImageFolder class which is able to accept data in the format that we stored it in. It also allows us to apply transformations, like randomly adjusting the hue and brightness, which can help improve the generalizability of our model. It's also possible to create your own dataset implementation that accepts datasets in formats other than this image-folder format. We'll do this in our road following example, where we create a custom dataset that loads images along with the x, y coordinates of a target in the image. Because our custom dataset class implements the torch Dataset __len__ and __getitem__ methods, we can use it with the torch data utilities, like this random_split function and the DataLoader, which produces batches of samples and allows shuffling as well as multi-threaded loading.

After we've defined our dataset, we define our neural network. The model we'll be using is called AlexNet, and we'll start from a pretrained model that was trained on the ImageNet dataset. The reason we start from a pretrained model is that the neural network has already learned important features of natural images, like textures and edges, that will be useful in accelerating the training of our own model. It can also improve generalizability, because these features were learned over many more images than our dataset will have. The AlexNet model we'll be using is defined in the torchvision package; you can see its implementation here. It's defined as a PyTorch module, and the pretrained weights are stored at this path. In our implementation, we replace the final matrix multiplication with one that has an output dimension of 2; the reason we do this is that the original model was defined for a thousand classes, while we only have two classes, blocked and free. The torchvision package also has other neural network architectures that span the spectrum of computational complexity and accuracy. In our road following example, we'll use ResNet-18 and show how you can modify a layer in this network to produce the output dimension you want. We also hope to soon show in this project how you can accelerate many of these model architectures using NVIDIA TensorRT to get faster inference speeds on the Jetson Nano.

The last notable item in the training notebook that I want to mention is the PyTorch optimizer. For this example, we're using stochastic gradient descent, and we're optimizing over all of the parameters in the network. The stochastic gradient descent optimizer takes a few parameters, like the learning rate and the momentum; you could adjust these parameters to change how the neural network converges during training. Another thing you could adjust is which parameters you want to optimize over; for example, if we wanted to optimize over just the final layer in the neural network, we could do this.
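Putting the pieces described above together, a condensed sketch of the dataset, model, optimizer, and training loop might look roughly like this; the hyperparameters, the test split size, and the epoch count are illustrative rather than the exact notebook values:

    # Condensed sketch of the collision avoidance training setup described above.
    # Hyperparameters, split sizes, and epoch count are illustrative values.
    import torch
    import torch.nn as nn
    import torchvision
    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.ColorJitter(0.1, 0.1, 0.1, 0.1),   # random brightness/hue-style jitter
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    dataset = datasets.ImageFolder("dataset", transform=transform)  # blocked/ and free/ subfolders
    train_set, test_set = torch.utils.data.random_split(dataset, [len(dataset) - 50, 50])
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True, num_workers=2)

    model = torchvision.models.alexnet(pretrained=True)                  # start from ImageNet weights
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)  # two classes: blocked, free
    model = model.cuda()

    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(15):
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images.cuda()), labels.cuda())
            loss.backward()
            optimizer.step()

    torch.save(model.state_dict(), "best_model.pth")   # file name is illustrative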
Let's go ahead and run the training loop. This will optimize a loss value that's computed using our predicted outputs and the true outputs, and the loop will also test against our test dataset split. We can create a new window for the output of this code cell by right-clicking and selecting Create New View for Output; this displays the history of our test accuracy over each epoch during training. It took about five minutes to run 15 epochs over our entire dataset, and the final model is stored in the Jupyter file browser on the left. As the complexity of your dataset or your neural network architecture increases, you might need to consider using a desktop GPU machine or a cloud GPU machine for faster training.

Let's see how our neural network performs. To do this, we open up the live demo notebook. Now we have the live demo notebook open, where we can see the live video feed from the robot as well as a slider indicating the probability that the robot thinks it's blocked. Let's go ahead and see how the robot responds to different situations on my desk. As you can see, it did pretty well, but when I put the robot on a different surface, like this more textured hardwood floor, it's a little more confused, likely because our training set only included one surface. It's also very confused on this cloth surface, given the textures.

I'm going to go ahead and collect different datasets on these different surfaces to see how well the robot does when we add new data into the mix. I'll keep my datasets disjoint so that I can evaluate how well a neural network trained under different sets of data generalizes to data that isn't in its training set. It took me about five minutes per dataset to collect the other two datasets, which also have about 100 images per class. I now have three datasets in the file browser on the left: a bed dataset, a desk dataset, and a floor dataset. I'm going to go ahead and train four different neural networks: three of them trained on each dataset independently, and one trained on all three datasets at once. I'll then test the accuracy of each neural network against each situation as well.

Now that I've finished training, you can see this matrix, which shows the accuracy of the different models we trained against each of the three test scenarios. It's important to note that our test and train datasets are completely disjoint; in no scenario are we using an image that we trained on for evaluation. We can see that the bed dataset generalizes the best across the different test sets, possibly due to the more varied surface texture of the cloth. Unsurprisingly, training on all three datasets combined resulted in the highest accuracy, and in some scenarios it actually improved the model's ability to generalize within its own environment. By selecting a random subset of all three training sets combined, we have the dataset that's labeled "all"; this model is trained on the same number of images as the other datasets, so we're not biasing by the number of images we're using. As you can see, this model, even though it was trained with the same number of images, is able to generalize much better across environments than a model trained on any environment in particular, but it has slightly lower accuracy within any single environment. Now let's go ahead and take our best model, trained on 300 images, and try it out in the live demo to see how it does.
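The core of that live demo loop is conceptually simple. Here is a rough, hedged sketch of computing the blocked probability and steering on it; the preprocessing constants are the usual ImageNet values, class index 0 is assumed to correspond to the blocked folder, and the model, camera, and robot objects are assumed to have been created as in the notebooks:

    # Rough sketch of the live demo loop: estimate the probability of 'blocked'
    # for each camera frame, turn left while blocked, otherwise drive forward.
    # Assumes model, camera, and robot objects were created as in the notebooks.
    import time
    import torch
    import torch.nn.functional as F
    import torchvision.transforms as transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def prob_blocked(frame_bgr):
        # Convert the BGR numpy frame to RGB, normalize, and run the classifier.
        rgb = Image.fromarray(frame_bgr[:, :, ::-1].copy())
        x = preprocess(rgb).unsqueeze(0).cuda()
        with torch.no_grad():
            logits = model(x)
        return float(F.softmax(logits, dim=1)[0, 0])   # index 0 assumed to be 'blocked'

    while True:
        if prob_blocked(camera.value) < 0.5:
            robot.forward(0.4)    # free: drive forward at a modest speed
        else:
            robot.left(0.4)       # blocked: turn in place until the view clears
        time.sleep(0.05)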
I found that the features available in Jetson Nano were crucial to being able to rapidly prototype JetBot. Having the Ubuntu 18.04 desktop environment made it easy to install software like JupyterLab, which was critical for prototyping my applications. It was also nice to be able to select from a variety of deep learning frameworks to pick the one that suited the project's needs. The availability of the Sony IMX219 driver gave me a wide selection of affordable cameras to choose from, and the support for various Python hardware libraries made it easy to integrate hardware into my project. The performance of the Jetson Nano made it possible to do real-time inference, which is necessary for real-time projects like JetBot. And finally, it's valuable to know that as my project scales up with more data, more image streams, larger neural network architectures, or higher-resolution inputs, there's a scaling path across the Jetson family and I don't need to change much of the software.

That concludes this presentation. If you're new to AI, I hope you see how accessible it really is: with a $99 platform you can begin learning and incorporating AI into your projects. To learn more about JetBot, like how we did real-time object detection or our upcoming road following example, check out the JetBot GitHub page. If you're familiar with Robot Operating System, we also have a JetBot ROS port which has useful components for integrating JetBot with ROS. Thanks for your time, and I look forward to seeing what everyone creates.

Hi, this is John; hopefully you enjoyed the webinar. We got a lot of questions, so I'm going to try to cover as many of them as I can, in as much detail as I can.

One of the first questions we got was how to choose which power mode to run the Jetson in. You saw the commands: you can use nvpmodel to select, say, 5-watt mode, and you can also come up with your own configuration, so it's very flexible in trading off between performance and power consumption. I think it really depends on your application. For JetBot, we're powering over the 5-volt, 2-amp micro-USB power supply and we have limitations on how much current the battery can supply, so we sometimes set it to 5-watt mode, especially if you have peripherals attached over USB; that might be necessary so you don't draw more power than the battery can supply. It also depends on the latency you need in your application: if you don't need it to run super fast, you could turn down the power and save some energy. So it really depends on the application, but you can trade off the power consumption pretty easily.

The next question I got was whether you train the neural network on a PC and then transfer the trained model to the Nano. It's a really good question, because it depends. For the collision avoidance example that you saw, I did it entirely on the Nano, but obviously I'm limited by how large a batch I can train on, and the more data I have, which as you saw means the more accurate the model will be, the longer it's going to take to train. So you can get started on a Nano, and you can collect a dataset for something like image classification using transfer learning, but if you get more and more data, you're probably going to need to scale up to something like a GPU desktop or a GPU cloud instance.
We also saw a lot of questions about the Arducam lens kit. The lenses in the kit I showed all use an M12 mount, and I think they have different adapters to change the distance at which the lens is mounted from the sensor. As far as I know, you have to use the Arducam IMX219 camera board with an M12 lens mount, and then you should be able to use the lens kit and try out different fields of view, because maybe you don't know exactly what you want to use for your project or where you're going to put it. So it's a good way to play around with different fields of view and figure out what works for your application.

Another question, or comment, we got was whether there are alternative projects you can get started with on the Jetson Nano in case you don't want to build the full JetBot. Right now we have JetBot, which we made, and it's about $150 in addition to the Jetson Nano, but you can definitely get started with the Nano without having a whole JetBot, and you can use just parts of JetBot if you want. I hope the first section of this webinar gave you an idea of what hardware is compatible with Jetson Nano, like which cameras and which battery pack work. If you just wanted to get started with real-time deep learning inference on the Jetson Nano, you could probably get away with just the camera and the Jetson Nano itself, and maybe not even do the web programming. It gets more convenient when you have a battery, the Wi-Fi, and the IP display, because then it's portable, which is fun for playing around and testing things, but you can definitely get started with just the camera and the Jetson Nano. The hope is that eventually you'll start to post your own projects, and other people can learn from that, get inspired, and build on what you've done. So right now we have JetBot, but hopefully we'll see more projects emerge and more things come out that you can reproduce.

Another question we got is whether you can run the JetBot software on a TX2. The TX2 consumes a bit more power than the Jetson Nano, and the developer kit for the TX2 is larger, so you probably wouldn't be able to use the same form factor as JetBot. But the software, at least the deep learning components, is almost entirely in PyTorch, so that will run on the TX2 as well, because they have the same underlying software stack, and even a lot of the drivers and the GPIO headers are supported across the entire Jetson platform. So a lot of the software you can pull from and use on the TX2. I think there are some aspects that wouldn't just work right out of the box, but it's definitely doable and not too hard, because there are a lot of Python libraries there to make things easier.

Another question we got was whether TensorRT, which is our accelerator for inference, is for training. It's for inference, so maybe I misspoke somewhere in the webinar, but TensorRT is for inference. What you would do is train your model in a framework like PyTorch, and then once your model is trained, you convert it to TensorRT and run the TensorRT optimizer, which does things like fusing layers to accelerate your model for inference, so that you usually get a two to four times speedup when you're just running the model after it's already trained. Hopefully we'll be able to document that more in the JetBot project; we're definitely looking into it so that you can really get the most out of the Jetson Nano where you need the performance.
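As a concrete illustration of that train-then-optimize flow, the torch2trt converter from NVIDIA-AI-IOT can be used roughly like this; it is a sketch that assumes torch2trt is installed and that the model only uses layers the converter supports:

    # Rough sketch of optimizing a trained PyTorch model with TensorRT via torch2trt.
    # Assumes the torch2trt package (github.com/NVIDIA-AI-IOT/torch2trt) is installed.
    import torch
    import torchvision
    from torch2trt import torch2trt

    model = torchvision.models.alexnet(pretrained=True).cuda().eval()

    # The converter traces the model with an example input of the deployment shape.
    x = torch.ones((1, 3, 224, 224)).cuda()
    model_trt = torch2trt(model, [x])

    # The optimized module is called exactly like the original one.
    with torch.no_grad():
        y = model(x)
        y_trt = model_trt(x)
    print(torch.max(torch.abs(y - y_trt)))   # should be a small numerical difference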
An interesting question I got was from a group working on a robot to follow a person; they want to track the human in front of the robot and asked what the best way to do this is. I've spent a little bit of time on this problem, and there are a lot of aspects to it: it depends on the scenarios and the reliability with which you want to follow the person. One of the hard things about following a person is that if the person goes out of the scene or is occluded, you have to recognize them again, especially because their face might not be showing. The way I worked on this was using a technique called person re-identification, and there are some datasets out there for it. You would basically detect all the people in the images, then select each person and come up with an identity based on their general appearance, which could include their clothes. You could combine this with something like face recognition, and then you have two metrics for recognizing a person and following them. So if you're really trying to do re-identification, that's one way you could approach it. But if you think about how the robot will interact with the person, maybe it's good enough, like we did in the JetBot project, just to follow the person closest to the center of the field of view; the robot tracking that person will naturally follow them, and if they get occluded, the person has to step in and have the robot start following them again. So those are two ways you could do it.

Another question we got is whether there's OpenCV in the JetBot image, and yes, there is. There's actually OpenCV in the standard Jetson Nano image from the getting started guide. The OpenCV we use is compiled with GStreamer support. If you're not familiar with GStreamer, it's a way to create multimedia pipelines; what this lets us do is acquire images from the camera using the NVIDIA Jetson's ISP, the image signal processor, which is a really fast way to access and format data if you're acquiring it from a CSI camera. So yes, we have OpenCV, and yes, that OpenCV supports GStreamer, so you get accelerated camera acquisition, which is primarily what I use it for, and then feed that into a deep learning pipeline (a rough sketch of such a capture pipeline follows after this answer block).

Another question is what FPS we can get for recognizing objects with these models. In JetBot, we have two TensorRT-accelerated models that run at, I think, 14 FPS for SSD MobileNet V2 and 18 FPS for SSD MobileNet V1. The nice thing about these models is that you can use the TensorFlow Object Detection API to train them for your own tasks. If you make your own dataset with a bunch of bounding boxes, you could train on that; you do have to use one of those model architectures for the tools that we have in the JetBot project, but if you do, you can train it to detect whatever bounding boxes you want and then get basically 14 or 18 FPS, depending on which model you're using, on the Jetson Nano.
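Here is the capture-pipeline sketch mentioned in the OpenCV answer above. The exact GStreamer pipeline string varies by camera and L4T release, so treat the one below as a starting point rather than a definitive recipe:

    # Rough sketch of capturing CSI camera frames on Jetson through OpenCV's
    # GStreamer support. The pipeline string is typical for nvarguscamerasrc
    # but may need adjustment for your camera and L4T release.
    import cv2

    pipeline = (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12 ! "
        "nvvidconv ! video/x-raw, width=224, height=224, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError("Could not open camera; check the pipeline and connection")

    ret, frame = cap.read()   # frame is a 224x224 BGR numpy array, ready for a network
    print(ret, frame.shape if ret else None)
    cap.release()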
Another question is whether the robot can recognize hand signals, like raising your hand to stop the robot or pointing to change the direction of the robot. That's a really cool question, because I think gesture interaction is an area where Jetson Nano could really excel. Out of the box, the answer is that it doesn't have that capability, but the longer-term answer is that it's definitely possible. There are a few ways you could approach hand signals. One is pose recognition: there are lots of pretrained models for pose recognition, and I think there are already some videos online of people running OpenPose on Jetson Nano, so I would check that out. You could use out-of-the-box pose recognition, or you could even train something to recognize whatever gesture you want yourself. If you took a bunch of pictures of yourself standing in one position, and then a bunch of pictures standing in a different position, a classifier could potentially differentiate between the two; that's a very simple way to detect a gesture using image classification.

Also, while we're on that topic: if you saw the snippets we included, we have a road following example coming out soon, and what it's actually doing is regression. It takes an image as input, and rather than determining a class label like blocked or free, as you saw, it determines two values, x and y, which are the coordinates of a target that we label in the image. For road following, we'll explain it in more detail when we publish the example, but we're basically labeling the direction we want the robot to travel; how we define that is a little more involved, but in general we're just pointing to where in the image we want the robot to go, and it goes there. With that same code, instead of labeling where you want the robot to go, you could instead label your nose if you wanted to track your nose, or your wrist if you wanted to track your wrist, or something like that. Right now this example only works with one point that you're trying to track, but the fact that you can create your own dataset, label it yourself, and train it yourself, and that it doesn't require that much data, is really powerful, because you'll be able to incorporate that into a lot of projects depending on how creative you get (a minimal sketch of such a regression head appears after this answer block).

One question we got was that they noticed everything is done in Python, and asked whether other languages are recommended as well, such as Java. Personally, for robotics, I primarily use Python, C, and C++. For JetBot we chose Python because it's a really flexible and fast language to prototype in, which we think matters when you're experimenting with projects and trying to create something quickly; you want that flexibility. It also probably has the widest support for deep learning frameworks: TensorFlow and PyTorch both have Python APIs that are really comprehensive, and you can take advantage of all the scripting utilities that are available and other libraries like the OpenCV Python bindings. I'm most familiar with Python, C, and C++, and I think these are the most popular in the robotics community, so that's kind of why we did it that way, but you could definitely play around with trying out different languages.
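Here is the regression-head sketch promised above: a ResNet-18 backbone whose final layer is swapped for a two-value output, the same kind of change the road following example makes. Data loading and the regression loss (for example mean squared error against labeled x, y coordinates) are omitted:

    # Rough sketch of a regression head for road following / point tracking:
    # a ResNet-18 backbone whose final layer outputs two values (x, y) instead
    # of class scores. Training against labeled coordinates is omitted here.
    import torch
    import torchvision

    model = torchvision.models.resnet18(pretrained=True)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)   # two outputs: x and y
    model = model.cuda().eval()

    dummy = torch.zeros((1, 3, 224, 224)).cuda()
    print(model(dummy).shape)   # torch.Size([1, 2])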
Another question we got is whether you need to do PID control for the motors and movement of JetBot. I think that depends on what you're trying to do; for our object tracking we just do proportional control. If you're not familiar with PID control, it's basically just a way to control some actuators to achieve a goal. For our object tracking or road following examples, we use it to track the center of the road, or to track the target object, so that the motion turns out to be really smooth. You can do whatever you want, though: you could try something like moving the robot forward if the target is within a certain region and turning right if it's outside of it, and then try adding proportional control to see if it makes things any smoother. In general, I'd recommend starting simple and thinking about what your task needs. For our collision avoidance, the robot really just drives forward and turns left when it sees an obstacle, so all the complexity is basically in the deep learning. You can start simple, figure out what you need, and go from there.
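The proportional idea above boils down to a couple of lines. A minimal sketch, where the gain, speeds, and the convention that the target's horizontal position is normalized to the range -1 to 1 are all made up for illustration, and the robot object is assumed to provide a set_motors(left, right) method like the one in the jetbot package:

    # Minimal sketch of proportional steering: turn harder the further the target
    # is from the center of the image. Gains and speeds are illustrative values.
    def drive_toward(robot, target_x, base_speed=0.3, gain=0.4):
        # target_x is assumed normalized to [-1, 1], with 0 at the image center.
        steering = gain * target_x
        left = max(min(base_speed + steering, 1.0), 0.0)
        right = max(min(base_speed - steering, 1.0), 0.0)
        robot.set_motors(left, right)   # robot API assumed from the jetbot package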
Another question we got is what the integration with ROS is. If you're not familiar with ROS, it's the Robot Operating System, basically a framework and toolset for building fairly extensive robotics applications. We made a port of JetBot for ROS that allows you to control the motors and access the camera using ROS nodes, so if you're familiar with ROS and you just want to play around with JetBot in ROS, that's a great tool to have. But for our Python libraries, we're not actually using ROS under the hood; it's all done in plain old Python using the maker libraries from Adafruit and that kind of thing.

The next question I saw is where you can see a video of the object recognition working. You can go to the JetBot GitHub, head over to the examples, and there's a video of a demo. If you search for the TensorFlow Object Detection API, you'll find more information about how to train the object detection model, and the last tutorial I did actually covered this exactly, so if you search our tutorials you can find one on TensorFlow and TensorRT that talks about how you can train an object detector and accelerate it with TensorRT.

Another question we got was whether there are any applications we might suggest for a design project at a university, like in computer science or computer engineering. I think JetBot is a cool example; I actually have a bit of an education background working with robotics in a college curriculum, so a lot of that knowledge went into the JetBot project and into thinking about how it could be educational, what people can take away from it, and the practical needs of a college curriculum. So JetBot itself wouldn't be a bad choice if you can find a way to build it; it's a lot easier if you have a 3D printer. You can also start, as I mentioned earlier, with just the camera and a Nano, and run a lot of the deep learning samples to get people started learning deep learning, training a model, and collecting data on a small scale. The nice part is it's really flexible and accessible. As far as a specific application, I think it depends on what you're trying to teach and what you want the students to learn, but you can definitely use the Nano to get started with teaching AI in general, and I think having the live camera is a really exciting aspect, because it's nice to interact with it and see how it does in the real world.

Another question we got is how compatible the Jetson Nano is with, I think, the Intel RealSense depth camera. There are already YouTube videos online about how to integrate the Intel RealSense camera with the Jetson Nano, I think from JetsonHacks, so the camera works on the Nano. As far as JetBot goes, our software all targets a monocular camera, because our focus is on using deep learning, and most pretrained models you would use, and most of the research there, are based around single images; we wanted to transfer that and apply it, and that's why we chose a monocular camera for JetBot. That said, the camera mount can be removed, so it's modular. You could 3D print your own camera mount for the Intel RealSense, plug it into USB, and it will work with the Jetson Nano, and then, depending on what you're trying to do, you could go from there.

Alright, I'm looking for our last question here. I guess the last question I'll answer is related to training on a GPU farm for high-resolution images and large quantities of photos. As I mentioned, you can do the training for this project at a small scale on the Jetson Nano itself, but once you have a lot of data, or you're trying to train on higher-resolution images, you'll probably want to scale up. The short answer is that you can definitely do this: you can train on GPU clusters, TensorFlow supports distributed training pretty well, and PyTorch also has utilities for distributed training, but my experience is mainly training on a desktop machine with a GPU and on the Jetson Nano itself.

Okay, thank you all for attending today's webinar, and thank you, John, for presenting; you did a great job. There were a lot of great questions that came in that we weren't able to get to at this time, so be sure to post those on the forum and on GitHub to keep the conversation going. We've recorded this presentation and we'll have it, along with additional resources, available shortly on demand; you'll get an email later this afternoon with a link to that information. And don't forget to become an NVIDIA registered developer to have access to the latest news on products, software, events, and more. Thank you all for joining us today, and have a great day.
Info
Channel: NVIDIA Developer
Views: 32,774
Keywords: Jetbot, NVIDIA Jetson Nano, AI Robot, Jetson Nano
Id: zOCSRzDUI-Y
Length: 61min 55sec (3715 seconds)
Published: Wed Oct 02 2019