NVIDIA Jetson: Enabling AI-Powered Autonomous Machines at Scale

Video Statistics and Information

Captions
Today I'll be talking about the NVIDIA Jetson platform and how you can use it to build and deploy AI-enabled autonomous machines at scale. I hope you enjoyed the opening video and got a sense of what you're going to learn about today.

Let's first understand what is driving the growth of AI at the edge. Improvements in sensor technology and economics have created a boom of connected devices, also known as the Internet of Things. These devices are delivering productivity improvements and enabling use cases that were previously not possible, such as autonomous mobile robots for warehouses and last-mile delivery, voice- and perception-enabled systems, smart and safe cities, and predictive maintenance. Several of these devices need AI at the edge for one of four reasons. First, they generate a huge amount of data, making it prohibitive to stream to the cloud for processing. Second, they operate in environments where reliable connectivity is not guaranteed. Third, they need to act and take control in real time and cannot tolerate the latency of the cloud. And fourth, for many of these applications the data cannot leave the device for privacy reasons. The number of IoT devices is growing exponentially, with forecasts estimating 150 billion devices by 2025 and over a trillion devices by 2030, presenting a huge opportunity for AI at the edge.

As we see companies building and deploying solutions for AI at the edge, we are learning about a lot of unique challenges that are very different from those of a traditional embedded computing system and that require a fundamentally new approach to development and deployment. Let's look at some of these challenges. The first challenge is the growing diversity and complexity of neural network models. Due to the rapid advancement of the field, we see new model architectures emerging almost every day, and to keep up with this fast pace of innovation it is important to choose a computing platform that is flexible enough to support the models of today and prepared to support the models of tomorrow. The second challenge is that inference is usually one piece of a complex application pipeline, and building a compelling solution requires accelerating the full stack. We often see companies choose an architecture that uses an accelerator for the AI part of the pipeline, only to quickly realize that the functions before and after inference become the bottleneck: they are limited by Amdahl's law. It is therefore important to accelerate the entire pipeline and not just the AI algorithm. Lastly, since AI devices are essentially software-defined systems that get better over time as they gather more training data, they need to be connected, monitored, and updated regularly, so it's important to have modern DevOps tools and workflows to scale a solution from a handful of devices in pilots to thousands in production.

This slide shows how rapidly the complexity and accuracy of AI models are evolving; it helps you visualize that explosion with some of the most popular and widely used models. In the left chart you can see that in just over five years, model complexity has increased by orders of magnitude: newer models like BERT for NLP have 340 million parameters, and GPT-2 8B has 8.3 billion parameters. Compare that to an early image classification model such as GoogLeNet, with just 6.7 million parameters. On the right you can see various computer vision models plotted by accuracy and compute requirement; again we see a large variety of models, with solutions reaching even superhuman levels of accuracy in some cases.
To address this growing market of AI at the edge, and to solve the various challenges I just mentioned, we created the Jetson platform. NVIDIA's Jetson platform is designed for creating software-defined autonomous machines. It's an open platform that has been widely adopted, with over 450,000 registered developers and growing. The Jetson platform comprises three main pillars. The first pillar, which forms the foundation of the platform, is the CUDA-GPU-enabled Jetson computer. The second pillar is the extensive set of software tools, SDKs, and libraries. The last pillar is our ecosystem. We'll cover each of these in detail in the following slides.

Let's start with the Jetson computer. It comes as a system-on-module designed from the ground up around a high-performance NVIDIA SoC with an embedded GPU, plus the compute, I/O, and memory required to build products with AI at the edge. The modules come in various power levels, performance levels, and form factors to cover a wide variety of applications. The most important thing about the Jetson modules is that they all share the same software architecture, enabling you to leverage your software investment across your product portfolio. At the entry level we have the Jetson Nano, which offers half a teraflop of performance and operates at as little as 5 watts, ideal for solutions that process a couple of models and sensors. At the highest end we have the Jetson AGX Xavier, which offers 30 trillion operations per second of deep learning compute at 30 watts, for designing complex, fully autonomous systems such as outdoor delivery robots, which require running several models for perception and control in real time while processing high-bandwidth data from several sensors. The newest member of the Jetson family is the Jetson Xavier NX: smaller than a business card, it delivers 20 trillion operations per second of AI performance at just 15 watts, making the Xavier NX the smallest, most powerful supercomputer for AI at the edge on the market today. The combination of small size and high performance is enabling a wide variety of applications, such as portable medical devices, autonomous drones, and automated optical inspection systems.

As I mentioned before, Jetson is designed from the ground up for autonomous machines and hence offers a lot more than just an AI accelerator. You can see here the various subsystems available in the Jetson AGX Xavier. It includes an easily programmable CUDA-enabled GPU that can also be used for graphics or other accelerated computing functions; Tensor Cores for AI processing; and a deep learning accelerator (DLA) focused on high-efficiency convolution operations for deep learning. There is also a high-performance multi-core Arm CPU for your serial tasks; a multimedia processor for functions such as encoding, decoding, scaling, resizing, and color conversion; an image processor for the raw pixel data coming from the image sensor; a vision processor for common computer vision tasks; and high-speed I/O such as USB, Ethernet, and PCIe. Connecting all of these processors is a unified memory architecture, which truly enables parallel processing with minimal latency and maximum throughput. In addition to the system-on-modules, we also offer developer kits that bring out the most widely used I/O and operate as standalone AI computers; they can be ordered directly from the NVIDIA website or from retailers around the world.
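Because the embedded GPU is the same CUDA architecture found in NVIDIA's discrete GPUs, it shows up through the same driver APIs. As a minimal sketch (my addition, not from the talk, and assuming PyCUDA is installed on the device), you can query the module's GPU exactly as you would any other CUDA device:

```python
# Minimal sketch: querying the Jetson's integrated GPU through the standard
# CUDA driver API. Requires PyCUDA; output values depend on the module.
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)
print(dev.name())                # e.g. the integrated Volta GPU on AGX Xavier
print(dev.compute_capability())  # (major, minor) of the CUDA architecture
# On Jetson, "device memory" is the DRAM physically shared with the CPU,
# which is what enables the zero-copy, unified-memory data flow.
print(dev.total_memory() // (1 << 20), "MiB")
```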
The second pillar of the Jetson platform is the extensive software stack; let's look at its components in detail. JetPack is the foundation: a comprehensive SDK that includes everything you need to start developing on Jetson. It includes an Ubuntu-based Linux operating system, provided along with the complete kernel source, and tools for going to production with things like fusing, encryption, and secure boot. To attach sensors and peripherals, it includes drivers for commonly used interfaces such as PCIe, SPI, I2S, I2C, MIPI CSI, and more. For multimedia processing on the onboard hardware accelerators I mentioned earlier, we provide common APIs for tasks such as encoding and decoding in various formats, scaling, and format conversion. For graphics we provide support for Vulkan and OpenGL, while X11, Wayland, and libdrm are supported for display. On the underlying CUDA-X stack we provide libraries for deep learning such as TensorRT and cuDNN, libraries for computer vision such as VisionWorks, and other fundamental accelerated computing libraries such as cuBLAS and cuFFT. Another key component of JetPack is the set of development tools that run on the host and let developers cross-compile, profile, and debug their applications. Providing all of these components together as a single SDK makes developers' lives easy: they can develop and go to production fast without spending a lot of time resolving dependencies and assembling tools.

NVIDIA TensorRT is one of the key AI libraries included in JetPack and supported on Jetson. TensorRT comprises a compiler and a runtime that let developers take models trained in any of the popular AI frameworks, such as TensorFlow and PyTorch, and deploy them at the edge with a minimal memory footprint and high performance, leveraging the underlying architecture of the product. TensorRT takes care of tasks like quantization, layer fusion, kernel auto-tuning, multi-stream execution, and more, so that developers can quickly go from training their models to deploying them on the edge, without having to jump through hoops figuring out how to optimize them. This seamless workflow from training to inference, enabled by NVIDIA's unified architecture, is extremely critical to the success of AI products, given the inherently iterative nature of developing and deploying an AI project.
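To make that workflow concrete, here is a minimal sketch (my addition, not from the talk) of compiling a model with TensorRT's Python API. The ONNX file name and the FP16 flag are assumptions for illustration, and the exact builder calls vary somewhat across TensorRT versions:

```python
# Minimal sketch: compiling an ONNX model into a TensorRT engine on Jetson.
# Assumes TensorRT's Python bindings (they ship with JetPack) and that
# "model.onnx" was exported from TensorFlow or PyTorch.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # let TensorRT pick FP16 kernels where profitable
engine_bytes = builder.build_serialized_network(network, config)

with open("model.engine", "wb") as f:
    f.write(engine_bytes)  # deploy this serialized engine with the TensorRT runtime
```

Layer fusion, kernel auto-tuning, and precision selection all happen inside that build step; the serialized engine is what you ship to the device.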
This chart shows some data highlighting the capabilities of TensorRT. We ran a diverse set of benchmark models covering classification, detection, pose estimation, segmentation, and image enhancement; even models like BERT for natural language processing can be accelerated and run in real time with TensorRT on a Jetson at the edge. These models were trained with different frameworks and have different neural network architectures, but they can all be compiled and run on the same Jetson device with TensorRT.

While such benchmarks are great for understanding the compute capability and flexibility of a platform, they are far from sufficient for choosing the right platform for your product, so let me explain why you need to look beyond the benchmarks when choosing a platform for a practical AI application. AI inference is fundamentally a multi-dimensional problem. The first dimension is the type of model: as we saw earlier, AI research is progressing rapidly and we see new model architectures emerging frequently. If you choose a model today, it is quite likely that in six months you will want to replace it with something that offers better accuracy and performance, so it's important to choose a platform that can accelerate every model. The second dimension is input resolution. The benchmarks you saw earlier often report performance for the very small input resolutions used in the ImageNet challenge and other academic research, but in practice you need to iterate and explore what input resolution works for your problem; for example, you cannot detect a sub-millimeter manufacturing defect if you scale your input down to the 224 by 224 resolution used by a lot of benchmarks. The third dimension is the choice of framework. The biggest challenge today for companies developing AI products is finding AI developers, so if you limit your solution to a specific framework, you are going to close the door on a huge talent pool. The fourth is batch size, or stream processing: for many use cases you need to process more than one sensor and more than one input, and while benchmarks show great performance for a single input, adding more inputs does not always scale the performance evenly. The last dimension is quantization: while it can really help reduce compute needs, it often requires a lot of effort and is not feasible in many cases. One example is a customer who needed to distinguish weeds from crops; given the minute feature differences, they absolutely had to use 32-bit floating point to get the desired accuracy. Having the flexibility to iterate across these multiple dimensions is important, and with the Jetson and NVIDIA platform you have the flexibility to explore and trade off across all of them to build the best product in the shortest time.
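As a small illustration of two of those dimensions, framework choice and input resolution, this hedged sketch (my addition, not from the talk) exports a PyTorch model to ONNX at a larger-than-benchmark resolution so it can then be compiled by TensorRT as shown above; the model choice and the 1024x1024 resolution are arbitrary placeholders:

```python
# Minimal sketch: exporting a PyTorch model to ONNX at a task-appropriate
# input resolution, rather than the 224x224 used by many benchmarks.
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True).eval()
dummy = torch.randn(1, 3, 1024, 1024)  # iterate on this size until your detail is resolvable
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=11,
)
```

A similar export path exists for TensorFlow models via tf2onnx, which is part of why ONNX is emerging as the common interchange format mentioned in the Q&A below.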
The flexibility of the Jetson platform was also exhibited in the latest MLPerf Inference benchmarks, published recently. MLPerf is a third-party benchmark in which almost all commercially available AI solutions competed, and Jetson topped the performance charts across a wide variety of workloads and scenarios; in fact, Jetson was the only platform that could even run all the scenarios and workloads in the edge and mobile SoC category. An important thing to note is that even though we topped the ResNet-50 offline performance, that number by itself is practically useless for anybody trying to choose a platform, because no edge device needs that throughput at that resolution. What matters is that, thanks to the flexibility and programmability of our architecture, we can translate that performance to more practical scenarios, like real-time object detection on a 1200 by 1200 input, a very practical and important use case for autonomous vehicles, which need to look far ahead and need higher-resolution input with more pixels to detect a person or an object far away. That ability to carry performance across different scenarios and model types is extremely important when you strive to deploy AI in practice. For anyone interested in digging deeper, all the source code and results for these benchmarks are publicly available on the MLPerf GitHub.

Coming back to the Jetson software: as I discussed, one of the challenges of deploying AI at the edge is that accelerating inference alone is not enough; we need to accelerate the full pipeline to achieve the performance goals. Built on top of the JetPack SDK, we have the DeepStream SDK. DeepStream is designed to accelerate the full stack required for streaming video analytics. It's a modular SDK that allows developers to build highly efficient pipelines for intelligent video analytics, or IVA. What you see here is a typical IVA pipeline constructed with DeepStream plugins, and at the bottom it shows the underlying hardware used by each plugin. By accelerating each function of the pipeline and leveraging the hardware architecture to move data without any memory copies, we are able to achieve up to 5x the throughput of a design that accelerates only the inference. Several of our customers have successfully used this SDK for analyzing data from smart cities, building perception for robots, and customer analytics for retail. For a deep dive on DeepStream, please remember to watch my colleague's DeepStream session, also presented at GTC Digital.
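For a flavor of what such a pipeline looks like in code, here is a minimal hedged sketch (my addition) that assembles a single-stream IVA pipeline from standard DeepStream GStreamer plugins; the input file and inference config path are placeholders, and a real application would also attach probes to read the inference metadata:

```python
# Minimal sketch of a DeepStream-style IVA pipeline: decode -> batch ->
# TensorRT inference -> on-screen display. Assumes DeepStream and the
# GStreamer Python bindings are installed on the Jetson.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 "
    "! nvinfer config-file-path=config_infer.txt "  # TensorRT-backed detector
    "! nvvideoconvert ! nvdsosd "                   # draw bounding boxes
    "! nveglglessink"                               # render to screen
)
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```

Because every element in the chain keeps frames in device memory, no CPU memory copies occur between decode, inference, and display, which is where the up-to-5x pipeline speedup comes from.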
Similar to DeepStream, we have created the Isaac SDK for robotics. Isaac includes an engine that lets you build a complex application graph in a modular way, a variety of GEMs that provide the common algorithms and AI functions required to build mobile robots, and tools for visualizing and debugging. The different GEMs are mapped to the appropriate hardware on the system for acceleration, and the engine enables efficient data flow. What you see here is the graph of one of the sample applications we provide with Isaac, called Carter; it uses GEMs such as localization, mapping, obstacle avoidance, and path planning to build an autonomous indoor mobile robot.

To summarize, the Jetson platform comes with an exhaustive set of software that enables easy and fast development of AI-enabled applications for smart cities, factories, logistics, healthcare, agriculture, and more, and Jetson is used to build software-defined autonomous machines. We regularly ship new software releases that bring new capabilities, new features, and higher performance to the existing portfolio; these releases come out roughly every six months. In this slide you can see the major software releases since the Jetson TX1 was announced. While most devices tend to get slower over time, we, on the contrary, have almost doubled the AI performance of the TX1 purely through software updates in the three years since it launched in 2016, and this has only been possible because of our unified software architecture and a huge investment in AI.

The third and last pillar of the Jetson platform is our ecosystem. Our ecosystem partners are a vital piece of the platform, as they support our customers every step of the way to enable faster time to market. Whether you need an off-the-shelf system for initial prototyping or a custom design for mass production, our ecosystem partners can help you, and as you can see from some of the logos here, our ecosystem spans a wide variety of functions and regions. In addition to the sensor and hardware partners, we also have an extensive software ecosystem that includes ISVs that provide AI solutions for specific verticals, cloud service providers that offer cloud integration and device management and monitoring solutions, and other partners that provide custom optimization and integration services for embedded systems. Finally, we have distribution partners who enable a global supply chain and also provide local FAE support to our customers. The depth and breadth of our ecosystem is a big reason why our customers are able to develop, deploy, and bring complex products to market in record time.

Building on the three pillars of the Jetson platform, we are seeing huge success in the deployment of AI solutions at the edge across a wide variety of use cases and industries, some of which are shown here. These solutions are not only helping industries be more productive but also making our world safer, healthier, and happier. Given the current times, I would like to call out a few of them: the Jetson TX2 is being used by Oxford Nanopore to perform DNA sequencing in the field, delivery robots are using Jetson Xavier to provide supplies in areas under quarantine, and service robots are lending a hand to patients and doctors in hospitals. A demonstration of some of these successes was shown in the video broadcast earlier in the talk.

We have made a huge investment in the Jetson portfolio and have a solid roadmap of products planned for the future. All products in green are available or announced today. Looking out into the future, in 2021 we are planning a Nano Next for the entry level, which will be followed by Orin-based products for both the mainstream and the top end in 2022; these dates are estimates and subject to change. Alongside the commercial SKUs, we plan to continue delivering products for the industrial environment, which require high reliability, long operating life, and supply for up to 10 years. Following the TX2i, we are planning to launch the industrial version of AGX Xavier in the middle of 2021, followed by an Orin industrial variant after 2022.

Jetson has been adopted very widely, by a wide spectrum of developers ranging from high schools to startups to Fortune 500 companies, and this has been possible because it's an open platform with a lot of resources to learn and create. You can order developer kits from nvidia.com or from retailers around the world. There are dozens of high-quality open-source projects created both by NVIDIA and by our developer community, and these projects give you a springboard to jumpstart your development. The Jetson forum is a key resource where NVIDIA's dedicated technical team, along with ecosystem partners and the community, answers your questions and shares experience. The DLI and GTC also provide opportunities to complete courses and workshops to advance your learning on AI. With that, I would like to thank you all for attending this talk today and for being a part of GTC Digital. We will now open up the line for Q&A.

There is a question on the expansion of teraflops: it stands for trillions of floating-point operations per second, and TOPS is the number of trillions of operations per second. There is a question on the INT8 TOPS for the Nano: the Nano only supports floating point 16, so it does not support INT8. There's another question: what is kernel auto-tuning? If you're familiar with how the GPU and the CUDA architecture accelerate a workload, you have to write kernels to run on the GPU, and depending on your model architecture and its layers, and also on the GPU architecture you're running on, there are different kernels that could be used to accelerate the model. What TensorRT does is look at your model and at the underlying GPU architecture, and automatically determine the best kernels to give you the maximum performance for that particular model.
There's a question about what a complete system like this costs. It's not clear which system you're talking about, but if you look at our developer kits, the Jetson Nano developer kit is available for about $100 and the Jetson AGX Xavier developer kit retails for about 700 US dollars. There's a question about when the NX module is going to be available: we are targeting the middle to end of April, so sometime next month the NX modules will be made available.

There's a question that says: since it's getting more and more complex, how do you keep the overview? Well, the good thing about the Jetson platform is that it's all a unified software architecture, a software-defined system, so any time you spend learning and building your expertise on the current generation of products and software naturally translates to the next product; there is no new learning to be done as new, more complex solutions come to market. Is the new Jetson Xavier NX available as a developer kit? At this point we have only announced the module for the Jetson Xavier NX, and it is compatible with the Jetson Nano developer kit.

To the question of which deep learning development environment is better suited for TensorRT, whether TensorFlow, PyTorch, or something else: as I mentioned, the beauty of the Jetson and NVIDIA platform is that we support every single framework used across the industry today. Within these frameworks there is some standardization happening, where they are all converging on ONNX as an export format, so we see that becoming more and more the norm, and we are continuing to invest in that space. But to answer your question, choose the framework you are most familiar with, whether that's TensorFlow or PyTorch, and we will provide deployment for it on the edge. There was a question on what the numbers are, FPS: I assume this refers to the benchmark chart I showed, and yes, for all the image benchmarks the number was in FPS; we had a chart for natural language processing, and that showed the latency of the query for the NLP model.

To the question of what circuits are available for Jetson Nano: I need some more insight into what that means, so we'll reserve that for later. What was the session number of the DeepStream deep dive? Let me quickly check; otherwise, not to worry, we can come back to this topic. There's a question about whether any hardware vendors have 5G modem support for the Nano: if you look at our ecosystem partners on our webpage, there are several hardware ecosystem partners identified, and I believe a few of them have 5G modem support for the Nano carrier board.

There's a question on which types of smart city projects exist. We have a project with Verizon, for example, that has been publicly discussed and is being deployed in Europe and in some places in the US, where they use occupancy understanding to do smart lighting; as you can imagine, there are long periods when there's nobody on the road, and keeping the lights on when there is no activity is a huge cost for a city. Solutions like this, and traffic monitoring solutions that understand traffic flow and use it to control the lights and design intersections, are being explored in pretty much every country at this point.
For how long do Jetson modules receive new updates from NVIDIA, for instance new firmware? Well, in terms of new updates, as I mentioned, the Jetson TX1 was launched in 2016 and we kept making updates for it up to 2019. We make the decision based on where our customers are in the product lifecycle, and decide from there whether a new update makes sense and what level of updates is needed; but as you can see, we kept releasing updates for the TX1 until 2019.

Can you talk about any applications for Tiny YOLO? There are a lot of applications for Tiny-YOLO-based object detection, whether it's robots trying to identify obstacles in their path or traffic cameras trying to count the number of cars; it's been used in a wide variety of applications. Again a question on the availability of the Jetson NX, for those who missed it: it's targeted for mid to end of April.

On Jetson Nano: would encoding a 4K 30-frames-per-second video stream push power closer to 10 watts or 5 watts? If you're just doing 4K30 encoding, I would expect that to be more in the 5-watt range. On Jetson Nano, what is the difference in peak FP16 flops between 10-watt and 5-watt operation? The 10-watt mode provides about half a teraflop, and at 5 watts you get roughly 70% of that performance.

There's a question about the energy consumption of Jetson products; that's a good one, and one I would like to take a couple of minutes to explain. When we state power consumption for Jetson products, we take into account the entire computer: the SoC, the power management, the SDRAM, the I/Os, all of it put together. If you compare that to an accelerator or another chip-only solution, the power number you see there is for a particular rail or a particular IP within that entire complex system. For example, the Jetson Nano is a 5-to-10-watt product, but if you look at how much power is consumed by the GPU alone, it would be more like 2 to 3 watts, so the IPs are inherently very power efficient. Going to Xavier, we have also introduced the DLA, which is again a highly efficient accelerator for deep learning. So with Jetson, the power we quote for the products is the full system power, but if you drill down into the details, every IP is power-optimized; this isn't a problem for most of our customers once they understand the true power of each IP versus the system-level power.

Is there support for the Grove environmental sensor on Jetson Nano? I'll have to go back and check, but I believe that if it interfaces over one of the I/Os like I2S or I2C, it could be supported. What is the absolute maximum power usage of the TX2 and Xavier in maximum mode? There are two things to keep in mind: the power for the TX2 is roughly 15 to 20 watts, and for Xavier it is about 30 watts, and those are the TDP limits; when running in Max-N mode you could exceed them, but given the TDP constraints we don't expect anybody to use more than 30 watts. Another good thing at NVIDIA is that we dogfood a lot of our products: we use Xavier ourselves to build robots and self-driving cars, so we evaluate and understand what the real requirement is. For most autonomous machines built on Xavier, 30 watts is what you should be able to use while getting meaningful performance out of it.
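As a practical footnote to the power-mode questions above, Jetson boards ship with the nvpmodel tool for switching between predefined power budgets. Below is a minimal hedged sketch (my addition); the mode IDs shown match the stock Jetson Nano configuration, where mode 0 is the 10 W MAXN profile and mode 1 is the 5 W profile, but they can differ per module and BSP:

```python
# Minimal sketch: switching Jetson power modes from Python by shelling out
# to the stock nvpmodel utility (requires sudo on a default install).
import subprocess

def set_power_mode(mode_id: int) -> None:
    """Select one of the predefined nvpmodel power profiles."""
    subprocess.run(["sudo", "nvpmodel", "-m", str(mode_id)], check=True)

def current_power_mode() -> str:
    """Return nvpmodel's report of the active profile."""
    out = subprocess.run(["sudo", "nvpmodel", "-q"],
                         check=True, capture_output=True, text=True)
    return out.stdout

set_power_mode(1)            # e.g. cap a Nano at 5 W for battery operation
print(current_power_mode())
```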
Is there a production-ready carrier board for the Jetson Nano and NX that we can use in production? The ecosystem I mentioned in my presentation has a huge variety of carrier boards; they are available today for the Jetson Nano, and as soon as the Jetson Xavier NX launches, partners will be ready to ship production carrier boards so you can go to market really fast.

There's a question on whether there are any recommended or supported RTOSes for the Jetson line. As I mentioned in my slides, starting with the TX2 and AGX Xavier we have a dedicated Cortex-R5 subsystem, and we provide FreeRTOS running on that subsystem. If you're looking for a more real-time kind of operating system for the main CPU complex, one of our ecosystem partners has an OS called RedHawk, which provides real-time patches on top of Ubuntu, so it's possible to get some real-time capabilities beyond the BSP we provide; I would definitely look into RedHawk if you're interested in an RTOS.

When can we expect the next Jetson product after Xavier? Maybe you missed my earlier slide on the roadmap: we are looking at Orin-based products in 2022, again with both a mainstream product and a high-end product based on Orin, and in 2021 we are looking at the next version of the Jetson Nano. Can you recommend hardware partners for airborne communication, for video streaming to ground systems? I would again look at our ecosystem partners and contact them. As you saw in some of the videos, Jetson is being used in a lot of drones and aerial vehicles, so partners have solutions for streaming video in those applications; definitely check with our ecosystem partners, and I'm pretty sure they have something for you.

Are you looking to support more industrial protocols and related libraries, for example Modbus and CIP? This is something we have thought about, and at this point our plan is to lean on our partners who serve those particular industries: they have a lot of expertise in this domain and are already fulfilling these needs for projects and customers, so at this point we plan to continue relying on our partners to provide these protocols on the Jetson products.

How long will the Jetson products be supported? Generally, GPUs are supported for a few years, but the products Jetson modules go into are used for much longer. The Jetson products are designed for the embedded market, so we understand the lifecycle of those products has to be longer: every product in the commercial segment is committed to be available for about five years of supply, and if you need a further extended lifetime, our distribution partners will be happy to have that conversation with you; we know of customers who have negotiated agreements to continue end-of-life support through our distribution. From the NVIDIA side, we are committing to at least five years for the commercial products and a minimum of 10 years for industrial.

Question: which SLAM algorithms does Isaac use? Today Isaac has two SLAM algorithms: one is Cartographer-based lidar SLAM, and we also have a GEM for visual-inertial odometry. You can find the details on our Isaac developer page; all the documentation and details of the different algorithms are available there.
You mentioned that some of the benchmarks are scaled for small inputs; how would you use limited hardware capability for larger input sizes, and does the compiler assist with partitioning and scheduling, for example across multiple views? As I mentioned, the benchmarks you often see use very small input resolutions, and the reason is that a lot of architectures have the dependency that the weights must live within on-chip memory. With Jetson and the GPU architecture there is no such limitation: as long as you have a model that fits into the onboard memory, or into memory plus swap, you should be able to process it on Jetson. The latency you see for AI inference will vary, but there is no fundamental architectural limitation restricting you to a certain input size, and as I said in my presentation, being able to experiment with resolution is extremely important, because the resolution at which you can detect the detail you care about is something you can only arrive at after some iteration.

Can the Xavier NX and AGX be used for training, or just for inference? The short answer is yes, they can be used for training, because they use the same CUDA GPU architecture that you see in a DGX server; the same architecture is available on Jetson, so there is no limit on what kinds of models you can run, and you can do training. Of course, it will be constrained by the amount of compute on the module in terms of how long it takes to train something; however, if you're looking for on-the-fly incremental training, that's definitely something that can be done on Jetson, and we have partners with solutions around that as well.

I've heard reports from a customer that the 16 GB Xavier is no longer available in the US; has it been phased out in favor of the 32 GB version? That is correct. Again, since we use our own products, and based on feedback from customers, we decided to upgrade the memory on the Jetson AGX Xavier from 16 GB to 32 GB, and there has been no change in price, so essentially it's just more value for our customers: they can run more neural networks more easily with the 32 gigabytes of memory.

Compared to the Jetson Nano, will OpenCV performance necessarily be better on the Xavier NX? OpenCV has a mixed set of functions: if you compile OpenCV with CUDA support, some functions can leverage the onboard GPU, while others leverage native Arm instructions for acceleration. Since Xavier NX uses a newer version of Arm, has more CPU cores at higher frequency, and also a newer GPU, you should expect to see a lot of improvement in OpenCV performance going from Nano to Xavier NX.
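To illustrate that split between the GPU-accelerated and CPU paths, here is a minimal hedged sketch (my addition); it assumes an OpenCV build with CUDA enabled, which is not what the stock pip wheel provides:

```python
# Minimal sketch: using OpenCV's CUDA module when available and falling
# back to the CPU path otherwise. Requires an OpenCV build with CUDA.
import cv2

img = cv2.imread("frame.jpg")
if cv2.cuda.getCudaEnabledDeviceCount() > 0:
    gpu = cv2.cuda_GpuMat()
    gpu.upload(img)                                     # host -> device copy
    out = cv2.cuda.resize(gpu, (1280, 720)).download()  # resize on the iGPU
else:
    out = cv2.resize(img, (1280, 720))                  # CPU (NEON) path
```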
Will there be Gigabit Ethernet support in the future? Yes, I think that's definitely something we are looking into, and I hope we will be able to add it in the future. Does the TensorRT compiler require an additional toolchain, or is it standalone? TensorRT, as I mentioned, is built on top of CUDA and cuDNN, so it does require CUDA and cuDNN; beyond that, it's a standalone tool that runs on the device for both compilation and runtime. Why can't TensorRT support other targets, such as microcontrollers, in a few years? In TensorRT there are a lot of operations, some of which could be translated across architectures, but as I mentioned in my slides, things like kernel auto-tuning and layer fusion are done by the compiler with the underlying hardware architecture in mind, and we of course understand our own GPU architectures very well; TensorRT is designed around them for that reason.

How is exploring the different dimensions you mentioned possible with JetPack? I believe this question refers to the different dimensions of AI inference I discussed. There's nothing special you need from JetPack for that: our software libraries are designed to be flexible across all of those dimensions, so you don't need anything additional; JetPack contains everything you need to explore and iterate across them. What's better to focus on learning, PyTorch or TensorFlow? Whichever you find easier; they're both great AI frameworks.

You say the Xavier NX is compatible with the Jetson Nano developer kit; does this mean the NX module can be fitted onto the Nano carrier board? The answer is yes. If you recall, a few months back we released an update to the Jetson Nano carrier board, so there is a new revision of the Jetson Nano; the easiest way to recognize it is that the new revision has two camera connectors on the carrier board. If you are using that revision of the Jetson Nano, then yes, you can take the Xavier NX module and put it on the Nano carrier board.

Are the benchmarks run only on synthetic data? Actually, none of the benchmarks I showed were run on synthetic data; you can run them on actual images, and we provide samples of doing that in some of the tutorials I mentioned, like Two Days to a Demo. I think we are coming to the end of the session; there are a lot of unanswered questions, and we'll try to get back to those and post the answers. I would like to again thank you all for joining us today. I hope you enjoyed it and learned something, and please continue to attend other sessions of GTC Digital.
Info
Channel: NVIDIA Developer
Views: 10,760
Keywords: autonomous machines, embedded, AI, edge computing, IVA, industrial, hardware, SDK, NVIDIA Jetson, Jetson Nano, Jetson TX2, Jetson AGX Xavier, Jetson Xavier NX
Id: XoZKY6stOSk
Length: 54min 4sec (3244 seconds)
Published: Wed Jun 10 2020