Robot AI Demo - NVidia Deep Learning, ROS Navigation, Raspberry Pi

Video Statistics and Information

Reddit Comments

Pretty easy with a turtlebot...

👍︎︎ 2 👤︎︎ u/BenLeBoss 📅︎︎ Jul 22 2020 🗫︎ replies

i'm surprised he didnt use ROS for his open dog

👍︎︎ 2 👤︎︎ u/sudhanv99 📅︎︎ Jul 22 2020 🗫︎ replies
Captions
This video is about building more intelligent machines. I'm going to be looking at the Jetson Nano, the Raspberry Pi 4, and the Robotis TurtleBot 3, which runs ROS and does mapping and navigation using a laser sensor, and we'll be talking about how I can build this technology into some of my projects in the future, so that I can make robots which can navigate and actually do useful things.

Most of the robots I build use a microcontroller which is just programmed over a USB cable from a PC, and it does one task when it's booted up. Normally I'm using an Arduino, or I'm using the Teensy, which is programmed like an Arduino, and that's been fine for now for basic motion control and hardware control. But the Nvidia Jetson Nano and the Raspberry Pi 4, of course, are actual computers with an operating system, and that allows us to do much more intelligent stuff like machine vision processing, mapping, getting sensor data in, and also using ROS, which allows us to use pre-built modules to make intelligent machines.

This video isn't sponsored, but this is just a quick ad for ways you can support the channel, and that really makes all the difference to the projects. I have Patreon and YouTube channel membership on; those links are in the description below, and patrons and YouTube channel members can get access to all the videos up to a week early, which, all being well, means they've probably got next week's video already, and also sneak peeks and pictures of what's coming up. I have a merchandise store where you can get t-shirts, bags, socks, stickers and various other things with pictures of things I've made over the years, and there's also some affiliate links in the description; if you use one of those links to sign up or buy something it won't cost you any more, but I'll get some money.

Alright, let's have a look at the Nvidia Jetson Nano. This is the Nvidia Jetson Nano developer kit. The Jetson Nano itself is just this piece here, which plugs into a socket; you can buy these separately to build into your own projects, but the developer kit comes with a breakout board, which of course allows you to plug it into HDMI, USB and Ethernet so you can actually see what you're doing. It also has a 40-pin GPIO header, which I understand to be the same pinout as the Raspberry Pi, and it also has two connectors on the other side which you can plug a Raspberry Pi camera into, so they've done quite well on hardware compatibility.

Now, I've built my Jetson Nano into this little terminal-looking thing with a couple of 3D prints. My Jetson Nano is at the back there, we've got a battery and a voltmeter and a regulator so we can power it, and I got a screen off eBay, which is an 11-inch screen. I'm using a USB keyboard and mouse; these are actually the official Raspberry Pi keyboard and mouse out of the Raspberry Pi desktop kit, which we'll have a look at in a bit. But yes, it's an actual computer: it boots Ubuntu 18.04, there's an SD image you flash, and I've put a Wi-Fi dongle in mine, so it's actually truly portable now with the battery and the screen. It's a bit like having a mobile computer, and of course it runs Linux, so we've got a browser and everything else we need.

Now, it's about the same spec as a Raspberry Pi 4 (the Raspberry Pi 4 came out after the Jetson Nano), but since then Nvidia have launched the Xavier NX, which is much, much more powerful than the Jetson Nano. It's quite expensive as well, but it is basically a mini supercomputer. The reason to buy the Jetson Nano is that it's basically the entry point for looking at Nvidia's deep learning models and all the other stuff that they publish, so that you can get a grip on it before you buy something more expensive.
So we can have a look at a bit of code to do some vision recognition, using a deep learning model that's already provided as part of the framework for the Jetson series. The setup for the Jetson Nano is pretty well documented on the Nvidia website, up to getting the operating system on, and I followed the first tutorial, which is about doing a vision recognition example in less than ten lines of code, using a pre-trained deep learning model that's already been trained on millions of images by Nvidia. There's also a project page which has got lots of other stuff on, including actually tracking a person's skeleton, and that's using just a camera feed, not a depth camera like a Kinect but just a single monocular camera feed, and another deep learning model that's been trained to recognize how a human is posed. So I'm going to be using a USB webcam (I've got a Logitech webcam) that we're going to plug into the USB port, we're going to run that ten-lines-of-code example, and we'll see what we get out of it.

First of all, you'll notice I've got significantly more than ten lines of code. This is the ten-lines-of-code example, but I've just bodged an extra piece on the end here, which is just an if statement: it looks for a class index of one, which is a human, and it just prints out to the terminal that it's a human, along with the coordinates. So basically I could put these things in my own variables and use them in my own code, and send them out by serial or OSC to control one of my own robots (there's a sketch of roughly what this looks like just below). So let's just go and run the example. We should find it takes a little while to launch, and then we should see, hopefully, that it detects a human, which is quite good. You'll notice it detects me even if I'm obscured, so if I go slightly out of the shot, at some points it still knows I'm a human. So that works pretty well: it doesn't need to see the whole of a human, it just needs to see a piece of a human, and it pretty much knows I'm a person.

But it detects lots of other objects, which is really useful as well for robots and perception, because it's already trained on lots of household objects. So if I now come and turn the camera around and look at the keyboard and mouse: yep, it knows that's a keyboard. The white mouse isn't too good, because it's white on white, but the darker-colored mouse, it knows that's a mouse. What else have we got? Yep, a cell phone and a cup. And you'll notice that as soon as it sees a cup it thinks the table is a dining table, and I guess that's because the deep learning model is used to cups being on dining tables. Let's just take the cup away: yeah, now it occasionally thinks the table is a bed with the cell phone on it, and if I put the cup back it says this is a dining table, which would make much more sense. So that's just an example of how the deep learning models work.

I made this into a mobile computer for a reason, so if I go and put it in the kitchen, we can see that it recognizes all the common objects like the refrigerator and the oven. It also recognizes most furniture, like chairs and tables. In fact, you can look at the categories that this code will give you: there's 91 different categories all recognized, and the overall image set has been trained on millions of images and around a thousand categories, different breeds of dog and so on. You can retrain that fairly quickly using transfer learning, so that you can recognize, for instance, only cats or dogs, or build up your own categories for recognition.
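For reference, here's a rough sketch of what that modified detection loop might look like, based on the Python bindings from Nvidia's jetson-inference project. The camera device name, the display output, and the use of class index 1 for a person (which is how the COCO-trained ssd-mobilenet-v2 model labels people) are my assumptions, not details taken from the video.

```python
# Sketch of the "ten lines of code" detection example with an extra check
# bolted on for the 'person' class. Uses the newer videoSource/videoOutput
# API from jetson-inference; device names and class index are assumptions.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("/dev/video0")    # USB webcam
display = jetson.utils.videoOutput("display://0")   # window on the desktop

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)
    display.Render(img)
    display.SetStatus("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))

    # Extra piece on the end: if a person (class 1) is detected, print the
    # bounding-box centre, which could just as well be sent over serial.
    for d in detections:
        if d.ClassID == 1:
            print("person at x={:.0f}, y={:.0f}".format(d.Center[0], d.Center[1]))
```

The coordinates printed at the bottom are the sort of values you could push out to a motion controller instead of the terminal.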
This is of course quite useful: you can imagine just putting this into a mobile robot and immediately having perception, from a single camera, of the objects around it, normal household objects and animals and people.

So the next thing I'm going to demonstrate is the TurtleBot 3 from Robotis. Robotis make Dynamixel servos, and this is basically a reference design robot for learning ROS, which is a really important robotics framework that I've been learning about. We're going to do a demo first and then I'll talk some more about ROS. But first of all, the TurtleBot is basically a little robot: it's got two Dynamixel servos that allow it to drive around on its wheels and a caster; it's got an OpenCR board, which is basically an Arduino-compatible microcontroller board with an ARM Cortex-M7 on it, and it also has the interfaces for the Dynamixel servos and various other things; and it has a Raspberry Pi 3 B+ in it, which is where ROS actually runs, on top of the Raspberry Pi operating system. This one's also got a laser scanner on the top, which is basically used for mapping the environment, and this one I've also installed a Raspberry Pi camera on (it's the version 2 camera) because I wanted to tinker around with it, but that's not stock. So first of all we're going to do a little demo of what we can do with it, and then we'll talk more about what ROS is.

I'm just driving the robot around remotely here using a keyboard teleoperation program, and that's running remotely over the network on my laptop. So we're just going to drive the robot around, and then we're running something called gmapping, which is actually drawing a map of the robot's surroundings. It's doing that by using the laser scan data and also the wheel odometry, which tells it the pose of the robot, and that means how far it's gone forward and how far it's rotated. It uses that data combined to localize the robot and basically draw a map of everything in the downstairs of my house. We'll just speed that up a bit, we'll go out into the hallway, and we should find that it completes the map.

Now we've got the map, we can tell it to navigate. So I can use the GUI to put down a big green arrow (it turns red when I let go), and then the robot should plan a path, and we should see a wiggly line there of the path the robot's going to take from the kitchen, all the way through the doorway and out into the hallway. You can see the red arrow is now pointing the other way, so when it gets there it should adjust its pose as accurately as it can, and it should turn around to face the other way. You can also see on the map there's a sort of green fog following the robot: those are actually lots of little arrows, and that's basically every possible pose it could be in, and it's solving where it actually is as it goes, to get the best estimate. Here we go, so it should turn around and face the other way.

Now let's have a shorter one, towards the bottom of the stairs, and again the arrow is pointing the other way, so when it gets there it should turn round and face the other way. So that seems to be working OK. Now let's plan a path all the way back into the kitchen, and again we'll have it face us when it gets there. It seems to plan the path pretty well. It's obviously just using the map: it knows where it is, and it's constantly re-localizing itself as it goes to get the best estimate. So, not too much trouble actually planning that path and adjusting its pose to the right position when it gets there, which is quite useful if it had, say, a manipulator on it which would then do something when it got there.
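That green arrow in the GUI is really just publishing a navigation goal, and you can do the same thing from code by sending a goal to the move_base action server that the navigation stack runs. Here's a minimal sketch of that with rospy and actionlib, assuming the TurtleBot 3 navigation stack is already up with a map loaded; the goal coordinates below are made up for illustration.

```python
#!/usr/bin/env python
# Rough sketch: send a navigation goal to move_base from code instead of
# dropping an arrow in RViz. Assumes the navigation stack and a map are
# already running; the coordinates here are made up.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")

client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # somewhere out in the hallway
goal.target_pose.pose.position.y = -1.0
goal.target_pose.pose.orientation.w = 1.0  # final heading (quaternion, w=1 means yaw of 0)

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Goal finished with state %d", client.get_state())
```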
So now we're going to try and confuse the robot by adding something that's not on the map. I'm going to plan a path into the doorway and then put an obstacle in the way, and of course that has to be high enough that it can be seen by the laser scanner. You can see there it dynamically adjusts the path, you can see that block is now on the laser scan, and it should complete its path, avoiding the obstacle, into the doorway. For the next example we're going to try moving the obstacle while the path's already being planned, and we should see it dynamically drives past it. You'll also see me on the laser scan, so it can deal with temporary objects, and of course those disappear when they go out of range. So we should see the robot deals with that quite well and goes and meets its target.

That's pretty basic ROS functionality out of the box when you build a TurtleBot, or you build a basic ROS robot, but the TurtleBot itself is totally reliant on ROS for all of its functionality. As I said at the beginning, this is really a ROS training robot, to get to grips with ROS, and the barrier to entry seems quite high. I've been doing various other reading and various other training, but having the TurtleBot, and an actual robot to build, has been quite useful. ROS itself is open source, and I think really the best way to learn ROS is probably to build a robot with ROS on it and try everything from scratch. So the next task for me is to try and build my own robot hardware, instead of using the TurtleBot, try and get the same functionality up and running, and then use ROS in future projects.

Ultimately, though, ROS is a development framework for robotics development. Most of the functionality is built around a system of nodes, which can either run on the same computer or over the network. So in the TurtleBot example, I'm running the nodes that control the wheels, the robot odometry and the laser scanner data on the Raspberry Pi in the robot, and I'm running RViz (the ROS GUI) and also the mapping and navigation modules on my laptop. There are many, many other nodes that already exist: various mapping and navigation system nodes, nodes that talk to various sensor hardware, and other nodes for completing robot tasks. Somewhere we need to have the roscore running, in this case on my laptop, and that's a bit like a directory that nodes register with, so they can discover and talk directly to each other.

The TurtleBot has really good instructions for getting ROS installed and up and running, and these are free to read on the Robotis website, and Robotis have also published a free book in PDF format which is all about the concepts of ROS and ROS programming. I've been learning about ROS from an online course at Robot Ignite Academy; it costs money, but it does have a full simulation suite and runs completely in a web browser. However, the best way to get to grips with ROS, as I say, is to build your own robot, and that's going to be the next plan.

With ROS, generally the nodes have to run on a computer, like an Ubuntu machine or a Raspberry Pi or the Jetson Nano, but you can actually run ROS nodes, kind of, on an Arduino, provided it uses the ROS library and it's connected to a computer running the rosserial node.
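To make the node and topic idea a bit more concrete, here's a minimal sketch of a ROS node written with rospy; the node name, topic name and message are just made up for illustration. It registers with the roscore and publishes a message once a second, and any other node on the network that is pointed at the same roscore (via the ROS_MASTER_URI environment variable) can subscribe to the same topic.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node: registers with the roscore and publishes a
# String message on a topic once a second. Node and topic names are made up.
import rospy
from std_msgs.msg import String

rospy.init_node("demo_talker")
pub = rospy.Publisher("chatter", String, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data="hello from the Raspberry Pi"))
    rate.sleep()
```

On another machine you could run rostopic echo /chatter, or write a matching rospy.Subscriber, and the two nodes find each other through the roscore and then talk to each other directly.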
So let's have a look at that next: let's get the Raspberry Pi and an Arduino out and see if we can make that work. I've got the Raspberry Pi 4 desktop kit, and that comes with the official Raspberry Pi keyboard and mouse, the Raspberry Pi 4 in an official case, a three-amp power supply suitable for powering it, and two micro HDMI cables (there are actually two HDMI outputs on the Raspberry Pi 4), and it also comes with the Raspberry Pi Beginner's Guide, which has all sorts of information in it about programming in various languages and doing electronics and interfacing. I've also got, in addition, the Raspberry Pi 7-inch touchscreen, which we're going to have a look at right now.

So here's the touchscreen out of its box, which looks very nice. On the back there's a driver board, and it's got these mounting posts here to put the Raspberry Pi 4 on, which I've now taken out of its case. It does have a connector here which we can connect directly, and that cable is provided for the display, and there are some other wires in the bag which I believe are for power and I2C, so that the touch interface will work on the Raspberry Pi. So let's get that mounted and we'll boot it up. I've installed the Raspberry Pi and the associated cables, and I've just precariously balanced it on the back there, but this seems to be working OK: it's quite a crisp and clear display, and I can move the mouse pointer by using the touchscreen, so if I click on something we can open up all of the programs and so on, which is quite useful.

Now, I've got Ubuntu MATE running on here, even though it's a Raspberry Pi 4 and there isn't an SD image, and I'll put a link in the description about how you can install actual Ubuntu on the Raspberry Pi 4. I've plugged back in the keyboard and mouse so I can actually type, and I've also plugged in an Arduino Uno. The Arduino Uno's got some example code running on it, which comes with the ROS library, and that enables us to talk to a simple switch. Now, on another machine, which is an Ubuntu desktop machine, I'm actually running the roscore, and on that one we can have a look at the topic that's published by the Arduino code, which is talking to the rosserial node running on the Raspberry Pi. So if we now look at what ROS topics are published, we should see we've got one called 'pushed', as well as the other standard ones that get published when you start the core up. A topic is basically a pipe that connects the nodes together, and over the topics go messages. So if we now echo that topic called 'pushed', we should be able to see the messages that go over it, and we can see that as we click the switch we get a boolean of true or false. This is just a really basic example, but it shows that we've got a node running on the Arduino, talking to the Raspberry Pi, and over the network we can view that data on another machine, all using ROS as the framework.

My plan for the Raspberry Pi 4 and the touchscreen is to build a robot remote with a GUI interface that I can write in Python, talking to a ROS node to control the robot. Basically, I'm going to use two of these three-axis joysticks that I use in a lot of other projects and build them into a nice case, so that we can actually make a remote control, and hopefully that'll eventually replace my current universal remote control that I use in all the projects, which is just an Arduino and an nRF24L01. So hopefully we can make all those robots ROS-aware and use Wi-Fi to control them. Reading switch data isn't very exciting, but it does go to show how we can interface to robot hardware using an Arduino and get that data to and from ROS.
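The remote idea would essentially boil down to a node that turns joystick readings into velocity commands. Here's a rough sketch of what that could look like, assuming the joysticks are already being published as a sensor_msgs/Joy topic (for example by the standard joy node); the axis indices and scale factors are guesses and would need tuning for real hardware.

```python
#!/usr/bin/env python
# Rough sketch of the planned remote: turn joystick input into velocity
# commands for a robot. Assumes a sensor_msgs/Joy topic is already being
# published (e.g. by the standard 'joy' node); axis numbers and scales
# are guesses.
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist

rospy.init_node("joystick_remote")
cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

MAX_LINEAR = 0.2   # m/s
MAX_ANGULAR = 1.0  # rad/s

def joy_callback(msg):
    twist = Twist()
    twist.linear.x = MAX_LINEAR * msg.axes[1]    # forward/back stick axis
    twist.angular.z = MAX_ANGULAR * msg.axes[0]  # left/right stick axis
    cmd_pub.publish(twist)

rospy.Subscriber("joy", Joy, joy_callback)
rospy.spin()
```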
So I built this robot a while ago, which was the Nerf blaster robot, and this is pretty much an ideal candidate for being converted to a ROS robot. We just need to power a Raspberry Pi; it already has an Arduino which talks to the motors and talks to the wheel encoders, and it also has a laser scanning rangefinder on it, which is really similar to the one on the TurtleBot, and that one already has a ROS node from the manufacturer, so we can read that data straight into ROS. So it should be easy enough to make it be operated by a teleoperation program, just receiving the command velocity topic and those messages to move the wheels, and after that we just need to get the laser data and the odometry data back, and we should be able to use gmapping and we should be able to use the navigation stack.

Now, we could also integrate the Jetson Nano deep learning models, so that we could have it actually look through a webcam. It could go around its navigation path from point to point, perhaps on patrol, but then when it sees a specific category of object, we could make it do something like shoot it with a Nerf blaster. So there's lots of possibilities there to immediately make a more intelligent robot. I'm going to be doing that, hopefully, in a new series coming up, and showing all the steps to getting that working, provided I can actually achieve it. So don't forget to like and subscribe for more details on that project and lots of other projects, and if you'd like to support me through Patreon or YouTube channel membership, then those links are in the description below. Alright, that's all for now!

[Music] [Applause] [Music]
Info
Channel: James Bruton
Views: 125,380
Rating: 4.9528956 out of 5
Keywords: robot ai demo, ROS robot, ros navigation demo, ros turtlebot 3, navigation and mapping, lidar mapping, slam navigation, autonomous robot, robot that can see where it's going, robot that can drive to a destination, navigating robot, robot testing, ai demo, vision recognition, deep learning
Id: U0--ZJfmUEM
Channel Id: undefined
Length: 18min 1sec (1081 seconds)
Published: Mon Jul 20 2020